Below is the full transcript of the full-length podcast.
Setting the stage
The piece is called “What is Your Non-Machine Premium?” And that’s a term that I came up with for people being able to explain, “How am I better than an AI?” And so there are four key ways humans can future-proof themselves.
JANE
Cortney Harding is an AI and VR strategist and speaker. Her book, The Spatial Race: How to Prepare for Our Future in the Meta-Physical World, just came out, and the subtitle leads us to a compelling vision not only of the future but of our present today as well. So, Cortney, the last time we talked was a year and three months ago. When I looked it up, it was much longer than I thought. It seemed like it was more recent, and so I bet a lot of things have changed. I want to start with something you said. I’m just going to read it to you.
The weird middle territory
I think you ended the conversation with, “We’re at a place where the last thing has ended. The next thing hasn’t quite taken off and we’re just sitting in this weird middle space that feels very uncomfortable and we want to figure out what the next path forward is.” You’re nodding your head. So, you remember that idea?
CORTNEY
I remember that idea and now I have a whole entire book that is basically that idea.
JANE
Yes, your book, which I have read.
CORTNEY
Oh good.
JANE
Do you think we are still in the middle territory or do you think things are a little clearer now?
CORTNEY
I think things are somewhat clearer, but I think we’re still in this middle territory. AI in the last year and a half has been transformative for a lot of things. It has not nearly reached its potential. So, things have changed, but we have still not accepted a lot of those changes and adapted to those changes and embraced those changes. So, I think we’re still further along than we were, but still at an early stage. Can I curse on this?
JANE
You can.
Enshittification from Cory Doctorow is what’s happening
Okay. It’s a book title. I promise I’m not just being gratuitous. So, Cory Doctorow, who is fantastic, has a book called… You’re laughing because I’m sure you’ve heard the title. … Enshittification. And Doctorow’s thesis in this book, which is fantastic and worth reading, is basically that we were promised all of these amazing technologies. We had a glimpse of them in the 2010s, when the market was flooded with money and VCs were subsidizing everything and an Uber cost $5 and everything was amazing. Now those technologies have curdled into something that is very unpleasant.
So, if you look at social media, for example, when Facebook first launched, it was this really fun place where you could catch up with your friends from high school and post a photo of a party you went to, and now it’s just memes and slop and conspiracy theories. Twitter was arguably instrumental in revolutions across parts of the world, a force for good and a force for communication and also just for fun. I was in Twitter communities about arts and music and there’s a Simpsons mega thread that I was in that I’ve saved because it’s so fun and then it curdled into whatever it is now. Instagram was a fun place to post pictures of your cat. I think Instagram of all the social media platforms is still the most fun, but it’s definitely less fun and more professional.
Then you have services like Uber and the delivery services and what those services have done is they have displaced the incumbents and then they too have become terrible. So, with Uber Eats, for example, in New York City, there is no customer service. If someone steals your food, they’ll refund you, but they won’t actually do anything else. The drivers are incredibly dangerous on the streets of New York and no one cracks down on it. People are driving motorcycles on the sidewalk and it’s just a shrug. The problem with that is now restaurants that would previously employ individual delivery drivers dedicated to that restaurant now don’t do that anymore, right? The economics of that don’t make sense.
So, now you basically have no other option than these two terrible experiences, Uber or DoorDash. So, all of that has destroyed everything. So, basically, we’re looking at this world that we were promised and the promise was not delivered. New technologies like AI are fascinating and growing. The problem is they are not delivering maybe to the extent we want them to deliver. They have not yet fulfilled the promise that a lot of people have talked about. The adoption has been so uneven that people are frustrated. So, places will say, “Oh, we have this great AI customer service,” and it’s not great. It’s terrible, right? It could be great if they actually invested in it and were serious about it, but it’s not great. They’re not incentivized to make it great.
So, we’re at this place where, for me anyway, I get very frustrated because I see how things could be better and how things could change. It seems like a lot of people either don’t understand that or they don’t care. So, that’s I think the constant frustration point. A lot of people, myself included, are at a point where we know this could be better. We want to make it better. Yet at the top, people are just like, “Eh, who cares?”
JANE
You know what I find interesting? Well, first of all, I’d like to recommend that anyone who wants to learn more about that read your book, because I like very much the way your book goes through the different phases and then becomes practical and then talks about strategy and benefits and so on. It’s extremely well-organized. What I wanted to ask you to do, because we are not going to go into a lot of detail about the nitty-gritty, is could you explain just briefly the differences… I think we all know what the first web was like, and most of us know what Web 2 was like. Now, Web3, the metaverse, and what you call the metaphysical world, those three are quite different. Could you talk about each one just briefly to give people who don’t know about it a sense of what it’s about?
The meta present with AI and VR used intentionally rather than integrated
Sure. So, where we’re at right now is the meta present. The meta present is analogous to Web 1.0 in terms of the development of the next wave of how we interact with technology. So, Web 1.0 was a place that you visited. I’m old enough to remember, in the 1990s, having dial-up internet, having AOL, and having a computer room in my parents’ home. When I wanted to do computer, I went to the room where the computer was and I dialed up. I did computer for an hour or two. My mom would come and say, “Hey, get off the computer. I need to call your aunt.” I would hang up the computer and then I would go live my life, and I would do computer a few times a week.
There was a period, probably in the late ’90s or early 2000s, when my computer broke and it took me three weeks to actually get it fixed. So, I just didn’t go online for three weeks, and I didn’t really miss much. I had a bunch of unread emails, but most of them were junk or friends of mine saying, “How’s your summer going?” So it was a place that we visited, and you had to be very intentional about visiting. That’s analogous now to AI, VR, all of that stuff. So, to do VR, to do head-mounted devices, you put on a device, you do it probably for a specific thing, and then you take it off and you plug it in and you go away. With AI, you prompt ChatGPT or you use Sora to create a video.
The metaverse is one-to-many
You are actively participating in this world, but you don’t live there. It’s not part of your day-to-day life as much. So, that’s analogous to where we are now. The second phase is what I call the metaverse, even though that term has fallen out of favor and become loaded more recently, and what we conceived of as the metaverse, which is these embodied digital worlds, didn’t really take off. There’s a lot of reasons for that. I think it’s still going to take off in the future. We see a lot more young people participating in it. If Web 1.0 and the meta present are one-to-one, right, you are communicating with one other person or a group of other people, then the second phase, the metaverse, is one-to-many.
So, you are broadcasting and you might be getting something back, but you’re not necessarily consumed with it in all of your life. So, there was a mid-period in the early 2000s into the 2010s where people had laptops, people worked online all day, but they didn’t have smartphones, right? Or smartphones were new, or smartphones were not the dominant force in our lives. So, you would use your smartphone for something and then you would put it away. Now, and this is the current present, and this is going to be the metaphysical world, except with different devices and different technologies, we live online.
Everything is connected at this point. We read on our phones. We do business on our phones. If you go to a concert, your QR code is on your phone. It’s actually harder now to print out concert tickets than it is to just have someone scan a QR code. We’ve moved in that direction, where life offline is the exception and life online is the rule. So, as we move into the metaphysical world, life powered by AI and experienced through head-mounted devices of different sorts will just become how we experience the entire world.
Unbalanced and unfair in many parts of the world
What’s interesting as you were talking, and you talk about it in your book, is that it’s very unbalanced in the world, very unfair. It’s something you talk about quite a lot, that there are people in countries and places where they can’t begin to live like this for different reasons. Can you talk about that a little bit?
CORTNEY
Yeah, absolutely. So, there are countries that have their own internet, China being the biggest one. There are countries that have no internet, North Korea and Turkmenistan among them. There are countries where the government will shut off the internet. This happens in India, not infrequently. It’s happened in other markets. There is also just a lack of connectivity in the world. One thing that I think could potentially hamstring a lot of these devices has nothing to do with the device. It has to do with connectivity. This is a huge problem in the United States. There are parts of the United States where there’s no cell service, there just isn’t, right? And these are within two hours of a major city.
JANE
I can’t imagine that. I’m American, but I’ve been in France for so many years and France is very well-connected throughout the entire country.
CORTNEY
It’s very interesting because Europe does lag in certain areas, but where a lot of Europe and even Eastern Europe does well is connectivity. I have been on dirt roads in Armenia, dirt roads with no people and had better connectivity than I’ve had on the Taconic State Parkway in New York.
JANE
Really?
Infrastructure and connectivity issues
And that is an issue because that’s the foundation of these devices. It’s also a safety issue. If you are in an accident in a place that has no connectivity, how will you get help? There was a terrible case maybe 10 years ago now, where a young Google engineer and his family were caught in a snowstorm. This was in California. They had gone skiing and he tried to go get help. He passed away. He froze to death because there was no cell signal. So, it’s not an issue of, “Oh, I need to be able to scroll my Instagram whenever I feel like it.” It’s like, no, people will die if we don’t fix this connectivity issue. Unfortunately, there doesn’t seem to be an incentive to do that. Now, Starlink is a possibility, right?
There are possibilities, but the fact is there are these deeper ingrained infrastructure issues that we need to work on to make this future possible. I think that’s something that a lot of people aren’t really talking about. Certainly, there’s infrastructure issues around data centers, water usage for AI. That’s something that definitely needs work and needs a solution. My great fear is that all of this technology which has the power to be transformative and the power to make people’s lives much better will just go the way of the internet right now, which is it’s not a great experience. Infrastructure is not sexy, right? It’s not sexy to talk about how in New York City where I live, the trains flood when it rains.
Every time there’s a rainstorm in New York City, and they’re happening more frequently because of climate change, people will start posting videos of floods inside the subway tunnels. Now, this is the wealthiest city on earth, and yet we cannot maintain our subways such that they don’t flood if there’s a rainstorm. That’s just one example of how the basic bones of everything are just not where they should be. Where that gets interesting is, so, Waymo is coming to New York. They have just started doing pilots in the city and people have all sorts of big feelings about it, but the fact is Waymo is happening. It’s already all over San Francisco, Los Angeles, Phoenix. It is coming to other markets. Waymo is going to happen. Whether anyone likes it or not is somewhat irrelevant.
So, all of a sudden, once you have pretty cheap Waymos all over New York City, well, then why would you take the subway? Now the subway has upsides, but the idea is not like, “Oh, you should take the subway because it’s power to the people.” It’s like, why don’t we make the subway better than Waymo? I think that’s the fundamental thing that people miss: this technology is happening. So, then how do you produce something that is better? How do you keep doing this continuous… My friend has a book called The Upward Spiral. How do you produce a continuous upward spiral versus this bifurcated system where Waymos are obviously better and the subway just collapses?
JANE
Do you think underlying all these things we’re talking about is the question of money?
CORTNEY
Oh, for sure. In the US, people don’t want to pay more taxes for the subway. It’s a different mindset here than it is in European countries, especially Nordic countries. It’s a challenge that way. It’s a challenge because there is a huge bifurcation. So, again, I live in the US, so it’s from a US perspective, but the wealthiest group of Americans are doing great right now. The stock market is booming. The Magnificent 7 tech companies are just going up and up and up and up in value.
There’s all this amazing technology, but for the bottom percentage of Americans, for the huge majority of Americans, they are really struggling because this technology has not made their lives appreciably better. In some cases, it’s made their lives appreciably worse. So, I think there is a sense of people are strapped for cash, they don’t want to pay more. They don’t see the money going to places that actually are helping them. I think really that is at the heart of this angst about emerging technology is people feeling like they will be displaced and they won’t have any other options.
Need to upskill and reskill workers to prepare for job displacement from automation and AI
A lot of people believe that AI will take the jobs away from humans.
CORTNEY
It’ll take some of the jobs away from humans for sure. So, I have a piece that I put out in Forbes about a week ago, and I concentrate on the four areas where humans can improve and future-proof themselves. The piece is called “What is Your Non-Machine Premium?” And that’s a term that I came up with for people being able to explain, “How am I better than an AI? How can I provide a better experience?” And so there are four key things that you can do. The first is emotions. So, conversational skills, therapists will always have jobs. People aren’t turning to ChatGPT for therapy because they think it’s better than a human. They’re turning to it because there are not enough human therapists. Health insurance is too expensive. It’s not an appreciably better experience.
Teachers, doctors, nurses, all of these people who are emotionally invested and caring will be able to be better than an AI. The second is enhancement. So, if you do customer service and you are proactive and you are helpful and you are solving problems before they start, then you are better than an AI. If you are just reading words off a script, or if you’re not even answering your phone, why wouldn’t you be replaced by an AI? The third is experience. So, Waymo is coming. If you’re a cab driver, your option in the short term is you can protest and freak out and yell and scream, but you’re buying yourself a couple of years at most, and you’re also angering your customer base. If they’re inconvenienced, they’re not going to have a ton of sympathy. But you can offer different experiences.
Waymo will become just point A to point B. So, having a human driver will become a luxury good. Having a human driver that provides great playlists or interesting tours or good conversation, these are all things that drivers can do to start preparing to provide an alternative. So, there’s enhancement, there’s experience, there’s emotion, and then there’s ego.
Enhancement, experience, emotion, and ego
Ego?
CORTNEY
Ego. So, ego, not necessarily in a bad way. Taylor Swift is the example that I like to use. So, you could create an AI version of Taylor Swift right now and it would sound pretty good, but it’s never going to be as big as she is because people like her. They like her story, they like her personality, they like that she was bullied by the cheerleaders, and so were they. She’s the biggest pop star on the planet. So, it’s not just her. It’s like, “Are you a good hang? Are you a pleasant person? Are you somebody people enjoy spending time with?” A lot of these things are things that you can use technology to build your skill set and develop.
To be an emotionally intelligent, emotionally tuned person, you can use AI and chatbots and virtual humans to practice those skills. You can ask ChatGPT for ideas for interesting things to do to set yourself apart, or ask how you can improve. So you can use this technology to essentially future-proof yourself against the technology. Again, I’ve just had a couple of bad experiences with customer service in the last couple of weeks, and I’m like, “You guys are toast at the end of this.” I don’t see how people don’t see that.
JANE
Something that I found interesting that you talked about was using a VR or XR for training. Do you think that will make a big change in education in general?
CORTNEY
Absolutely. Absolutely. So, I’ve been working in that space for almost 10 years now.
JANE
What have you been doing for 10 years in that space?
Virtual human avatars to conceptualize behavioral and relational training
Good question. What have I been doing for 10 years? So I started in 2016 and I was working at a VR production company. We did some entertainment and then we started working on training because that’s where the market was. I’ve continued to do that and I’ve done that for clients like Accenture and Lowe’s and Walmart, PwC, Coca-Cola, right? So all of these different places, I’ve worked with them to not only build their training but conceptualize how they do the training. What I’ve really focused on is some hands-on training, some skills training, but a lot of what I focused on is behavioral training and specifically relational training. What that looks like is a project I did for Amazon last year where people had conversations with virtual human avatars that were powered by AI.
So, you had a different conversation every time and you were able to build the conversations yourself. So, it was no code, prompt based, and it was for frontline warehouse managers who tend to be very young. They tend to not have really any formal management training. They were just good employees who showed up on time and got promoted, and now they’re having a very hard time communicating with their team members because they were never trained to do it. When you have hundreds of these warehouse managers, training them at scale and allowing them to do repetitive training is very, very challenging.
Whereas this made it easy for them to build these very custom scenarios, practice at scale, get comfortable with the conversation, try different avenues, try different tactics. Build a person who is very voluble and chatty versus a person who’s just sitting back and being like, “Okay, okay, okay.” Those are two different types of conversations. They’re two different types of people to manage. You could build conversations such that the person spoke some English but not entirely English. Their proficiency was at a fourth grade level, let’s say, right? That’s the type of custom build you can do that a human really cannot do, and you can do it at scale. You can practice as many times as you need.
So, we were seeing people do these very custom scenarios and get comfortable, and what wound up happening was a 92% performance improvement and a 92% improvement in satisfaction. So, it’s that type of stuff where it’s really moving the needle, it’s building empathy, it’s building practice, it’s building these cognitive and communication skills. So, that I really think is going to be the future of how we learn and train for these skills going forward.
Do avatars have legal rights? Ask Raph Koster and Brittan Heller
Interesting. There was something that you talked about over a year ago, the last time we talked about avatars, and I remember asking you the question about, “Do they have legal rights or do you have legal rights against avatars if you’re in a metaverse context and an avatar does something that hurts you?” I was in Second Life once and this guy was chasing me and some other guy said, “Get up on the fence, Jane, jump, jump.” I felt like I was really in that situation and I got up on the fence. He helped me up, but imagine I hadn’t jumped. Imagine the other guy had hit me. Could I have done anything? I think not.
CORTNEY
So right now, not really. So, there’s an interesting resource that I think I recommended last time and I would recommend again. It’s a guy named Raph Koster, and he is an early game developer, early pioneer in the space. I spoke at an event with him a while ago, and he’s a lovely guy. So, he has a piece about the rights of avatars where he goes into depth with that. So, that’s a fascinating piece that’s available online. I highly recommend checking that out. The second thing with that is I spoke at Stanford Law School two weeks ago with a woman named Brittan Heller, who is a professor there, and she is amazing. Oh, my God. She’s incredible. She has the coolest background, and so she works a lot on that.
So, there’s not really at this point settled case law with regards to embodied avatars just because the market has not reached maturity yet. There are laws and regulations, again, varying territory to territory, state to state in the US on digital speech, digital hate speech, what’s called revenge porn, which is someone leaking photos of you without your consent. So, there are certain things already that could probably be expanded or cross-applied to that world. However, we are now getting to this really, really interesting gray area of proving harm. That will be even grayer when half the avatars that you are probably talking to are not even humans. They’re AIs. So, that’s going to get super, super interesting and super muddy very quickly.
I am not an attorney, despite my parents’ best efforts to get me to be one. I took the LSATs once and was like, “Not for me. Thank you.” But yeah, I think that’s going to be really fascinating to extrapolate out: what rights do avatars have if an avatar is hurt or killed or maimed? And then you get into video games. People are killing avatars left, right, and center, and that’s part of the game. So, I think that there’s going to be all these interesting issues and legal precedents, and my fear is two things. One, everything will just become overly restrictive in response. That will clamp down on free speech. Two, the people making these laws are often not tech-savvy.
If you have ever watched a hearing in the United States Congress or Senate about technology issues, you know exactly what I’m talking about. These are people who just have no idea what’s happening. Oftentimes they’re also not acting in good faith. There’s a famous hearing with the head of TikTok where a senator kept telling him he was part of the Chinese Communist Party, and he kept saying, “I’m from Singapore. That’s a different country.” So it goes from just cluelessness to full-on racism, but I think that’s what the challenge is.
Now you are starting to see an AI race between the US and China. That’s going to be fascinating because if China wins that AI race, which they very well might, then what does that look like? Their products are good. DeepSeek is good. However, try asking DeepSeek about Tiananmen Square and you will discover its limitations very quickly.
JANE
Yeah, you talked to [inaudible 00:22:39]. I thought it was really interesting when you said students came up to you and would ask, “What do you recommend I study in my graduate studies?” So, do you know what you said?
CORTNEY
Yes. I said, “Go to law school and study AI and the metaverse and emerging technologies and send me an invitation to your yacht in 20 years.”
JANE
That was it. That was it. So, I think you put your finger on something really interesting. I bet a lot of those kids listen to you.
How do we upskill? How do we reskill?
Yeah, no, I think getting back to the idea of AI displacing jobs, right? Will it displace some jobs? Sure. Does any new technology displace jobs? Yes. I know people who were fax machine salespeople, and they don’t really have those jobs anymore. People like horse breeders were displaced when cars came into being, right? So there’s always going to be displacement and churn. That’s not the issue. The issue is how do we handle it? How do we upskill? How do we reskill? How do we help people create transferable skills, and how do we create a culture of lifelong education and growth?
Because in the US, we’ve done a very bad job at that. You finish whatever your formal education is, and then a lot of people consider themselves just done. They don’t need to learn anymore. I think that’s really dangerous and we need to create systems around, “How do you have a life of continuous learning and growth such that if AI displaces the job that you have, you find something ideally better?”
Why do you say Spatial Race? What is the finish line?
Yeah. Is that a little bit the underlying idea behind the title of your book, when you talk about the spatial race? Is that right? The choice of the word race, Cortney, is very striking, because a race has people who get to the finish line. First of all, there is a finish line, though I’m not sure that’s relevant to what we’re talking about, but it’s definitely a competition with winners and losers.
CORTNEY
Yeah, exactly. It’s so funny, I gave a talk about a week and a half ago in Los Angeles on this exact topic, and I made a joke about Clayton Christensen and The Innovator’s Dilemma. I said, “That book, that little book has been taught in every business school on Earth for, I don’t know, 30 years.” The case study in the book is disk drives. That’s how old that is. Clayton Christensen, I think, has been dead for five or six years. What’s funny is everyone learns that book, everyone reads that book, everyone talks about that book, and yet the amount of incumbent companies that are not actually practicing what they preach is insane to me.
So, the amount of companies who are like, “Oh, we’re fine. We don’t need to change anything. We’re market leaders right now.” It’s like, yes, right now is the key term there, right? Because you will get disrupted if you do not get out ahead of this. So, the spatial race really is a race to… I don’t think there’s a finish line. There’s a wonderful Nike ad. So, I’m a distance runner. That’s the other thing I spend most of my time doing: running marathons and ultras. There’s a wonderful Nike ad that I had a poster of for a long time, and the tagline was “There is no finish line.” When you are a runner, a serious runner, it’s a lifestyle. It’s not like, “Oh, I crossed the finish line at this race and now I’m done.”
It’s a fun moment. You get a medal, but it’s a lifestyle. It’s an endless quest for self-improvement and all of that stuff. So, the spatial race has no finish line, but there will be winners and losers. I think the key is understanding how you participate in and win and keep up with the spatial race because incumbents will continue to get disrupted. So, if you don’t want that to happen, thinking about and embracing and getting out ahead of these new technologies is the way to do that.
Storytelling and building trust
I think perhaps storytelling will help. I know that you’ve used it in some of your VR exercises. Could you talk about that a little bit?
CORTNEY
Yeah. So, at the end of the day, I think the reason people are afraid of these new technologies has something to do with stories that are being told about them. I think AI is a particularly good example of this. So, many heads of big AI companies who tend to be men, who tend to be blustery, they’ll go on TV and they will just say it’s going to change the world and it’s so amazing, but what they’re actually doing is intimidating. They’re not giving concrete examples. They’re not saying it’s going to make your life better in X, Y, Z way. They’re just pontificating. If you look at them and the way they speak and the way they act, they’re not telling good stories to the average person.
So, if I went on TV and I said, “Waymo will displace X number of taxi drivers,” that’s a bad story. People don’t like that. If I went on TV and said, “Waymo has shown to be 90% safer than human-driven cars, and a Waymo has never been credibly accused or convicted of sexually assaulting someone,” that’s a different story.
JANE
Very different.
CORTNEY
What story are we telling? All three things are true. Waymo will displace jobs. Waymo is also much safer and especially much safer for women. So, it’s like where do we focus the narrative? Where do we focus the story? I’m not downplaying job loss and I do think we need to make sure people are upskilled, reskilled, and able to withstand that impact, but there’s also a lot of upside. I think, again, a lot of these mostly men are just building this case where it’s like, “This is amazing,” but they don’t explain why.
So, I have focused a lot on how AI can help women in particular. Women, especially women who are parents, do a lot of domestic labor, and they have to keep track of everything: pediatrician appointments, and who can eat what, and it’s grandma’s birthday next week, let’s send her flowers. All of that could be done by an AI very easily. So, what does that do for women? It allows you to think about your work, hang out with your family, go for a walk, watch a movie. You can outsource so much of the labor that I think a lot of people don’t want to be doing and have it done by AI. Now, what’s fascinating to me is I see a lot of companies doing this one particular thing. I have a friend who is a project manager at a big insurance company.
So, her big insurance company, every month they have an AI lunch and learn. So, they will bring in some experts and professors, some speakers and whatever, and they will spend an hour eating a sandwich and talking about how AI is the greatest thing since sliced bread and it’s wonderful and blah, blah, blah. So, she’ll sit through those. She’s like, “Great.” So she sat down and she thought, “Okay, how can I use AI to automate my entire workflow?”
So she trained an AI to write a lot of her emails and her reports and to do a lot of her work that she didn’t really want to do, the more rote work that is part of her job. So, at some point, she told her boss about this and she said, “Hey, look, I’ve gone to all these AI lunch and learns. Here’s the 10 things that I’ve done. I think we’re really embracing AI and it’s helped my workflow.” What do you think her boss did?
Can AI threaten your job? Or your boss’s?
He was upset.
CORTNEY
He threatened to fire her.
JANE
Really?
CORTNEY
Yup.
JANE
If AI can do that, then we don’t need you.
CORTNEY
Basically, that was his reaction. Also, it threatened him, right? Why would they need him? Why would they need anyone? And she explained like, no, you still need me to do these 10 other things. But I think that's where people are really getting stuck, and that's why you see this number that's been floating around: 95% of AI projects are "failing". I talk about this in the book. I'm hesitant to use the term fail, because a lot of the big failures of Web 1.0 are now big successful companies that everyone uses. But the reason I think a lot of these projects are failing is A, they're not planned well; B, people don't understand the capabilities; and C, despite talking a big game, people are still very critical of and afraid of AI.
So, people will say, “Oh, so-and-so used AI to write an email to me.” That angers them, but they can’t quite explain why. I’ve done this a lot. I use ChatGPT to write things and write emails, especially if it’s an email that’s not emotional or heartfelt. I’m just like, “Here’s an update on such-and-such,” right? I’ll use AI to negotiate with people when I book speaking engagements. I think that people are still nervous about it. You see this on LinkedIn. There’s a lot of these LinkedIn posts that are very clearly written by ChatGPT. ChatGPT has its own cadence, and it’s almost its own way of-
JANE
Yeah, you can tell.
CORTNEY
Yeah, you can tell, right? And it’s not something I particularly enjoy, but that’s fine. But people get very angry about it, and it’s a LinkedIn post. It’s not the Magna Carta. Who cares? Here’s five things I learned about my divorce related to B2B SaaS sales, but people still get weird about it. So, I think that’s where we’re at right now. This really critical inflection point of we haven’t fully accepted this technology.
JANE
Why do people react like that? Are they afraid of losing something they have, like a job or an image? I'm just trying to think, because I think you're right. A lot of people are afraid of it, want to be able to control it in some way, and don't know how. Is it just lack of awareness?
Two kinds of AI slop
So some of it is a lack of understanding about how it’ll impact their work. So, they’re afraid of job loss. If an AI can do chunks of your job, then why would they need to hire you? I think there’s that. I think we are exposed to so much stuff that is fake, right? There is so much AI slop that I think it’s a natural reaction to push back against that. It’s a natural reaction to fear that because there’s a lot of AI slop out there that is functionally harmless. And then there’s AI slop out there that is really bad. It’s deep fakes, it’s propaganda. I think we have, at this point, a very low sense of trust as a society. It’s very hard for me to trust a lot of things unless they’re from very reputable sources.
So, if someone sends me a New York Times article, a New Yorker article, an Economist article, I generally trust it because that's a high value source with fact checking and protections in place. But anything else that someone sends me, there's always that skepticism. So, I think there's a sense of low trust, skepticism, polarization, and people should be afraid about this. If anyone watching this is ever in Boston, the MIT Museum has an exhibit on deep fakes that will scare you to death. I think I'm pretty good at detecting deep fakes, and there are certainly markers of deep fakes out there, but wow did this thing shock me.
JANE
Why?
CORTNEY
Because the deep fakes were really good, and I was shocked that I couldn’t figure out what was and was not a deep fake. That’s scary, right? Because again, if somebody sends me president such and such gave a speech on so-and-so and they sent me a clip of it, how do I know that’s real? How do I know anything is real? So I understand people’s fear and frustration around that. I do think as we build AI, we need to build in guardrails and responsible AI to make sure that’s not happening.
Guardrails for data privacy and ownership
I wonder how guardrails can be built. You use that word quite a lot in your book, and I was wondering, in fact, I made a mental note to ask you, for example, what do you mean by guardrails? What can be done about what you’re talking about now, deep fakes?
CORTNEY
So the first is people owning their own faces and giving consent to use those faces and those voices. So, I just got Sora a couple weeks ago. It's ChatGPT's new video app. I uploaded my face and I uploaded my voice, and then I set it so that only people I knew could use my voice and likeness. So, my husband and I did a Sora together and a friend of mine and I did a Sora together. That's just fun and we're goofing off and it's silly, but you've seen now organizations that have uploaded voice and likeness to Sora pulling back. Martin Luther King's foundation, whoever owns his name and likeness, uploaded it thinking, "Oh, this will be fun and inspiring." Within 24 hours, it was just wall to wall racism. So, they pulled it back.
So, I think there has to be really strict guidelines around who owns your face, who owns your voice, what do you consent to, what do you not consent to, how can you manage that? Then the next guardrail is data ownership, and this is where things get really tricky for me. So, I have a friend who’s starting this company, and when he told me about the company, I said, “I don’t know how I feel about this.” It’s been five months, and I think about it often. I was like, “I still don’t know how I feel about this.” So his company allows you to monetize your data and essentially sell it to AI systems for training. So, on one hand, that’s good. People should be compensated for their data. That’s positive, that’s great.
That's why we have copyright. That's why we have laws around ownership, but what's the incentive structure there? So for me, it's like, okay, yeah, I'll sell some articles that I wrote and maybe I'll get a little bit of extra cash and yeah, it'll be fun. I'll go buy a new dress or something, but I don't really need it, right? It's not the difference between feeding myself or not. It's more just like, "Oh, fun money. I'll take my friend to dinner." But there are a lot of people who would sell their data just to pay the rent or pay the bills. So, what does that do for a data set, and then what does that do for your privacy? So is the expectation that people with less money will have less privacy?
What is data? What is root data and who owns it?
They're going to be incentivized to sell their data. Now the alternative, unfortunately, is that ChatGPT just takes all your data without even compensating you. So, it's a really tricky thing. All these months later, I'm like, "It's good and bad and complicated and I don't know." Then there's the question of what is data? So this is where things get really weird and meta and crazy. I was at a World Economic Forum AI Governance Summit meeting in June. So, let's say an LLM scrapes an article that I wrote. Fine, but then 10 people prompt the LLM and it takes part of the article I wrote and part of this and part of that and it creates a whole new article. Well, who owns that article? Who owns the 10 things that grow from that, and the 10 things that grow from those?
It gets exponential very quickly. So, what you’re going to see is how do you even get to the root data and who owns the root data? And these things are black boxes, right? They’re very complicated. So, the idea of like, yeah, sure, ChatGPT is being sued by the New York Times. They could probably do a deal with the Times. They did a deal with Time Magazine. There are places they could do deals with. Where it gets very complicated is the issue of who owns a Reddit post, who owns a blog post on LiveJournal? I don’t know if LiveJournal is still around. A Tumblr post, who owns whatever and then how do you see what has been trained on what? So I think there’s interesting questions about data cleanliness.
There were famous examples of Google Gemini saying you can eat two rocks a day, and that's obviously very silly. So, some of this stuff is just silly, but some of it could get very scary. If people are using it for medical issues or for personal issues or for therapy, then it starts to get very scary. What we don't want is to just shut it down immediately and say no. If someone comes to you and says, "I'm feeling X, Y, Z feeling," you don't want ChatGPT to just turn off. But then how do you build it such that people get the help they need and the resources they need, while also not stifling free speech?
LLMs are not neutral
You spoke there about LLMs, large language models, and I read a really interesting article. The gist of it was that the LLMs around the world today are unbalanced toward the white, the global north, the well-off, the people who generate the content that the LLMs then suck up. Therefore, when LLMs are used to provide answers, that prediction of what the next word is and so on and so on, there's a whole dimension of our world, of our society globally speaking, that is not represented, or not represented in a valid way.
CORTNEY
Oh yeah.
JANE
To me, that’s a really big problem.
CORTNEY
And people have been studying discrimination related to LLMs for years now. So, there have been case studies where facial recognition doesn't work on people of color the way it works on white people because there's less data. I had a really interesting experience last year where I was working on these virtual humans. I met somebody who worked for the City of New York, and they have all these interesting programs, and they said, "Oh, we'd love to try this and pilot it with certain groups of recent immigrants and refugees, have them go through this process, because we don't have enough human translators." So this is the type of thing where we could use this to supplement our human translation work. I said, "That sounds incredibly cool, incredibly amazing, incredibly useful and rewarding. I'd love to work on that." So they said, "Okay."
So there’s a language called Quichua that is spoken in Ecuador. I mean, people speak Spanish and they also speak Quichua. A lot of people who’ve come to New York recently primarily speak Quichua, and it’s not a widely enough spoken language that there’s a ton of translators who are bilingual who can speak both and do services in New York City. So, I thought, “Oh, we can definitely solve this,” but we can’t because there’s not enough content on the internet. Now, a solution to that is coming soon, which is these real-time translation apps, but still there’s just not enough data in that language and in many other languages to make it worthwhile to develop, to have a big enough audience, and to have a big enough data set to pull from.
The idea of disappearing languages in the age of AIs is fascinating. I would love to read a book on that, but yeah, you're right. The training sets are imperfect. They are mostly from the global north, mostly in English or certain other languages that are more widely spoken. So, yeah, we have to look at where we are getting that data from. A lot of medical data, for example, is based on tests on white men. How does that relate to women? That's not a new problem, but if people are going to trust AI to be the answer, we have to look at how to catch up some of that data. How do we balance these LLMs to at least address that?
We need to learn critical human skills and problem solving
Yeah, there are a lot of things to be done. I have some quick questions I'd like to ask you, what I call the rapid fire question and answer. So, the first one I have is: what piece of advice would you give to the next generation, the kids, the really young kids today, the five, six, seven-year-olds who are [inaudible 00:40:09] in 15, 20 years from now?
CORTNEY
I would say work on your critical human skills and be flexible because the idea that what you want to be when you grow up at any given age will be what you actually do is not going to happen for the most part. So, I would say work on these big broad skills, work on your critical human skills, work on understanding technology, and work on problem solving. One thing that I have seen the most recent generation of parents do is they tend to try to solve every problem for their child. They are the ones coming up with the solutions. I understand that impulse and that is laudable, but kids need to solve their own problems. Obviously, there are problems that are too big for kids to solve.
But in terms of when I was a kid, I would go out on my suburban cul-de-sac and there were five or six other kids around my same age. We would just play all day in the summertime and we would make up games. We would have adventures and we would do this and that, but we were also just always negotiating with each other. How do you handle it if someone cheats in a game? How do you enforce the rules of the game? How do you decide where you're going to go, what bike trail you're going to ride on, whatever? So we had to solve our own problems essentially. So, I think that's a really valuable skill that kids today don't have as much: creative problem solving and anticipating problems.
JANE
Is there one thing that you think this generation we’re talking about will be better at than we are?
CORTNEY
Oh yeah. No. So, my niece is 13 and my nephew’s 11 and I spend a lot of time with them. They are digitally savvy and this is second nature to them. Every time I go see them, I take a headset with me. I put the headset on them and they just immediately know how to use it. It’s like I tell them this is the controller. Got it. They’re just immediately using it. So, I think their ability to just embrace new technologies, learn new technologies, and navigate new technologies is really important. I think the challenge is making sure they have those other human skills and those other relational skills and leadership skills.
A major ethical challenge of solving trust gaps
What ethical challenges do we have now or will we have in the fairly near future?
CORTNEY
Oh, my God. Well, that could be another hour, but I think a lot of it again comes down to trust. I think we are at a point where trust is at a pretty much all-time low in terms of trusting what we see, what we read, what we hear, what we consume. So, I think it's really solving that trust gap and making sure that there are guardrails and verifications in place. I think in today's political environment, that's very tricky because certain political leaders make their own deep fakes. So, I think that's going to be very challenging, but again, what I see, especially in the US, is a lack of trust. So, I think that's really the central question that we need to answer and the central thing we need to work on.
JANE
A slightly different question I have for you is: how should we approach getting the right balance between technology advancements and humanity?
Technology in service of humanity
So technology should be in service of humanity. So, the question is always, what problem am I solving? And if I can't say I'm solving a problem, then that's a good place to decide, okay, I'm going to do something else. So, the example most recently, and this is a ridiculous example, is there is a device called Friend. It looks like a necklace, but it records everything. You can have conversations with it. It's powered by AI. At this point, I think Friend is just performance art, because I think this kid just took all this VC money and decided to go nuts with it. They've spent $1 million on advertising in the subway. The graffiti on these things has been just incredible. That's why I think it's an art project.
There was something recently where people dressed up as a Friend. Somebody made a big cardboard disc and then someone else hit it with a stick outside a subway station. It’s really nuts and it’s really entertaining and ridiculous, but it does speak to this fundamental thing of you’re not really solving a problem with that. I think there are AI recording devices that are useful and interesting. I use them on most of my calls for summaries.
I do think it would be interesting to record my day and get feedback on like, “Oh, I had this great idea three hours ago and I forgot it, so now I can act on it.” I understand why that stuff works. I think the Friend thing is just too strange and dark, and so they’re not really solving a problem. So, I think that it’s really thinking about, “How does this improve people’s lived experiences?” That needs to be the central guiding factor.
Cortney’s career development: follow the flow of attraction
How do you see your career advancing, say, in the next 10 years?
CORTNEY
I don’t know, because I think the market is in such a strange place right now. I think that everything is changing so quickly. So, I like to quote this woman, Michelle Lamy, who is a French designer, performance artist, muse, just all around interesting, cool, strange person. She’s married to my favorite clothing designer, Rick Owens. So, Michelle Lamy has a statement that I love and I’ve said it millions of times. She says, “I follow the flow of attraction and try to make a living from it.” I love that. I’m like, “That’s what I do at this point. I follow the flow of attraction. I follow what I’m interested in. I follow these threads that I can go down and ideally I can pay my bills doing that, right?”
I’ve been generally successful, sometimes more than others, but generally, I’ve managed to save some money and do okay and pay my rent. Yeah, I think that’s really going to be the thing going forward, is focusing on what are you interested in? What animates you? What is compelling to you in terms of solving a problem? And then monetizing that.
JANE
You actually answered my last question, which was how do you personally prepare for an uncertain future? You just answered it.
CORTNEY
Bunker, stock up on canned goods. No, I’m not a doomsday prepper, I promise.
JANE
Cortney, do you have any last things that you’d like to say?
CORTNEY
Please buy the book.
JANE
Buy the book. Yeah.
CORTNEY
The Spatial Race is on Amazon right now. I am available to speak and consult. I do workshops, I do off-sites, I do keynotes. I'm booking for Q1 of 2026. So, if you are interested in that, please reach out to me. My website is cortney-harding.com. My company is friendswithholograms.com. I am pretty findable on LinkedIn and Instagram. Those are the two platforms I'm on most. I do have a pretty SEO friendly name. So, I am pretty easy to find. But definitely please reach out if this is interesting to you.
JANE
I’ll put out all that information on your page. Cortney, thank you so much for your time. It’s been really interesting.
CORTNEY
No, thank you so much for having me again. It’s been a lot of fun.