Setting the Scene
I think robots will be integrated into families. Your grandfather would have a robot girlfriend slash home keeper slash nurse. People would develop strong relationships with these different types of technology, and they would become integrated into society in a way that we would probably find very difficult to understand. Now, imagine if someone brings their companion robot with them on a flight. That person's going to expect that companion robot to be treated the way anybody else would be treated, right?
JANE
Lionel Robert is a professor of information and robotics at the University of Michigan. He's also director of their autonomous vehicle research collaboration. We discussed the emotional relationships between humans and robotics, and technology in general, today and how they will change over the next 10 years. I was really interested in seeing what your title is. It's a long title, and I was struck by the fact that you have information and robotics together, with sort of equal weight on each of those two words. People tend to want to focus on information, or on robotics: the machines, an Amazon warehouse and all that stuff.
The intersection of humans and robots
Yeah, the School of Information is very much human-centered computing from the start. I do human-robot interaction, so it is that equal weighting of humans and robots. I would argue that if you were doing just engineering, you might be concerned with just the robotics part. If you were doing just, let's say, psychology, you might do just the people. But the School of Information focuses on the intersection of humans and technology, and in my case, humans and robots.
JANE
Well, I like that. And then the second part of your title is amazing. Can you say it?
LIONEL
Yeah, so basically I started the autonomous vehicle group and I needed a name, and I wanted to get a good name. It's the Michigan Autonomous Vehicle Research Intergroup Collaboration, MAVRIC for short.
JANE
Now, MAVRIC, it’s not a coincidence.
LIONEL
Exactly. You’re absolutely right. I think that was definitely a requirement.
JANE
I thought that was really cool, the idea of what a maverick really means: it's unconventional and non-conformist.
LIONEL
Bold, daring, unique, individualized, not just following the crowd, different. That was sort of the idea behind it, yeah.
JANE
Yeah. Well, I think that’s great.
LIONEL
Thanks. Yeah.
Lionel’s career
How did you get into the field that you’re working in now?
LIONEL
When I first started my research career, I considered myself to be more of a collaboration scholar. I was studying how humans collaborated with other humans through technology. We had a lot of research on teams that work face-to-face, but now teams were working online, going across time zones, across cultural zones, across boundaries: what we call virtual teams. And a lot of what we knew made teamwork important, people having a shared history, being in the same location, interacting face-to-face, all that was out the window when you look at these virtual teams. So I was studying how technology essentially changed teamwork and how we could better design technology to support teamwork. That's how I started off. But over time, it became quite clear that humans were no longer going to be just collaborating through technology, but were beginning to collaborate with technology. And that's when I shifted more toward human collaboration with robots and other AI technology.
JANE
Collaboration with technology. You talked about the emotional relationships between robots and humans and how that’s a whole area, I think, of exploration, of research. The whole idea that there could be an emotional relationship between robots and humans is quite startling. Can you talk about that a little bit?
LIONEL
Yeah, I think we know now that humans are developing relationships with AI-enabled technology like robots. We see that a lot with ChatGPT, and the argument people make is that that same relationship with a robot can be much more visceral. The idea is that if you have a relationship with a chatbot, you never actually see it. It is never next to you. You can't, theoretically, lie down with it and take a nap. I mean, you could with a phone, but with robots, you can literally have it in your physical environment, and you can have it look any way you want in your physical space. That physical interaction could lead to a lot more emotional interaction than just an AI chatbot, for example.
JANE
So are you doing research and experiments with this?
Intimacy between humans and technology
Yeah, so we think that the next frontier, whether we like it or not, when it comes to technology is going to be the development of intimacy between humans and technology. If I want you to use your autonomous vehicle, I'm going to design it to establish some intimate relationship. And there are different types of intimacy. We're aware of sexual intimacy, but there's also emotional intimacy, right?
JANE
Yes.
LIONEL
And you can just imagine, for example, when we look at Siri and other devices like that, they're a little bit quaint, they try to be soft around the edges, and all of that is designed for a reason: to get you to let your guard down, to make you feel comfortable. But I think as technology evolves, we're realizing that this intimate relationship is going to become invaluable to getting people to use technology and, more importantly, to feel comfortable using the technology.
JANE
And to trust it.
LIONEL
To trust it in the sense of relying on it, depending on it. And as designers, remember, we want people to use the technology and to rely on it in a deep and meaningful way. And that can only come, for better or for worse, if people develop these relationships with it and this emotional dependence on it.
Emotional dependence on a robot
How do you know if someone has developed an emotional dependence on a robot, for example?
LIONEL
Yeah, we do various kinds of research. In some cases we do surveys; in some cases we interview them. So we actually talk to them about how they feel when they use it, how they feel when they don't use it, or how they feel if it were taken away from them. That last one is a particular problem. Some researchers have looked at what happens when you develop an emotional bond with, let's say, a robot, and then the robot is no longer upgraded or considered usable. Imagine you have a robot that you really develop a relationship with, and then the manufacturer decides it's going to phase it out. For a lot of people, that can be a very traumatic event, akin to losing a loved one or losing a pet.
JANE
Wow, and you have talked to people who feel that way about their robots?
LIONEL
Yes. With robots and technology, people develop strong relationships. Some people will say that people cannot develop relationships with technology, at least not in the way they do with humans; that's one view. But you're seeing people who are, I guess in some cases lonely, but in some cases we live in a society now where our kids grow up in an echo chamber where everyone listens to them, everyone agrees with them. And the next step beyond the echo chamber is to have a robot or AI that is exactly the way you want it to be. It never disagrees with you unless you want it to, has the same opinions as you, looks the way you want it to look, sounds the way you want it to sound. That is the next step of the echo chamber, in ways that we probably haven't fully thought out yet.
JANE
When you talk about that, are you thinking mainly about children or young people or adults also?
People who cannot integrate into society
I'm thinking about adults. There are two big concerns for me. Actually, three, I guess. I think about kids. I think about young adults who cannot integrate into society. You see a lot of this: a lot of them are in these echo chambers in their basements, or they can't develop a meaningful relationship, they have no peers except for the people they interact with online, they can maintain no relationship that we would traditionally think of. And then I'm thinking about people who get older, who get isolated. There's a lot of work being done for them too. The first thing to remember is that robotics is probably one of the first technologies primarily being driven financially by a market of elderly people. A lot of robotics is some form of assistive technology, whether it be autonomous vehicles or medical devices, you name it; it's all designed to assist you.
And almost certainly the market it's shooting for is the elderly. They are the largest generation. They're also the wealthiest generation, and they now live a lot longer than they can function independently. So a lot of robot technology is being designed for them. That's one thing to keep in mind. They're also the ones who are likely to get lonely, to get shut in. They're probably also the ones likely to develop a relationship with some of these devices that may not be healthy in some ways.
JANE
Well, you talked about the young people, not children, but say pre-teens or teens, who spend all their time in the basement looking at a screen and are therefore unable. They don't know how to build relationships. They've had no experience building relationships.
LIONEL
They have bad experiences building relationships, with peers, with the opposite sex. They just don't fit in, don't feel good about themselves, and get negative feedback from some of their peers.
JANE
And I wonder to what extent we as a society have created that or accentuated that by making the digital world so available and something that basically surrounds us.
LIONEL
That's absolutely correct. If I go back and think about my own life, there were times when we all felt ostracized by our classmates. We all feel a little bit like an outsider; it's just the nature of things. But we had no choice but to get out of the house, to go play, to go to school. We had no choice but to get back into the whole thing of life: repairing relationships, learning where you fit in and where you don't, learning how to be independent. Now, I suppose you don't have to do that. You can just stay home, get online, and the world's your oyster. And a lot of those coping skills, I think, are never truly developed. It becomes easier to rely on, depend on, and become emotionally dependent on some artificial thing that you believe won't let you down, which may not be true, than to go out the door and actually take a chance.
JANE
Yes. You talked about older people, which made me think about healthcare in general. From the small amount I know about it, I understand that robotics is used very widely today in healthcare.
Robots offering new lives to older people and disabled people
Yeah, exactly. Little by little, they're using it now, and it's going to get even more common. What they hope to do long-term is allow individuals to live independently in their homes with the help of robots, whether that's robots assisting them, telling them when to take their medicine, or helping them get up. They're even trying to develop a robot in the home that acts as a go-between between the homeowner, who's a patient, and their medical services. So maybe the medical services people check with the robot to see how the patient's doing.
JANE
Yeah, when we talked earlier, you talked about children with autism and how robots can possibly help with that in some way. How would robots be able to help with that?
LIONEL
So the idea is that these robots can help teach them skills, how to interact with people. Imagine they engage in behavior that a real human would reject them for. If they do something inappropriate, a real human gives a negative reaction, but a robot could actually correct them, could actually tell them, "Hey, don't really…" Another five-year-old child isn't going to say, "Hey, you know what? That hurt my feelings. Why don't you apologize?" A typical child is just going to react negatively and walk away, and now you've broken a friendship and gotten this negative reaction, this negative feedback. Whereas the robot can actually coach that individual on how they should behave in that social context. So the person learns without paying the penalty, let's say.
JANE
That reminds me of a conversation I had with someone else who is doing training of young people using the metaverse. What’s your opinion about that?
LIONEL
It's interesting. I've seen a lot of people trying to use what they call augmented reality, trying to combine the two. I don't really know how far you can take that, because there is this element of danger when you're in the physical world. I think the hybrid model is very difficult; it's more or less easier to have them completely in the virtual world. But I think you run a risk there too, for better or for worse. On one hand, I could say it could be good, because you can have real people in that metaverse interacting with you. The bad news is, how would you know? You wouldn't know whether that was a real person or not. So you still have the same problem of developing relationships with entities whose degree of autonomy you don't quite know, which I think is always interesting.
JANE
Yeah, it’s hard to imagine.
Making a chatbot of a loved one you lost
No, I think it used to be hard to imagine. I think now it's pretty easy to imagine, because we're progressing rapidly. I'll give you a perfect example. There are companies that will make a chatbot of your dead loved ones. Suppose your father dies. If you give them all the emails from your dad, they can make a chatbot of your father.
JANE
Wow.
LIONEL
Yeah, exactly. And even if you lose a young child. These are things that are actually happening now; they're not hypothetical. And the question no one asks is: is this a good thing or a bad thing? Because this is the fundamental question. In that example, is it a bad thing not to let go? Sometimes people will say it's healthy to let go of people who have passed on. And then the other question for me: let's suppose someone's a widower. A perfect example: they're elderly, their spouse died at the age of 70. Are they better off developing a chatbot based upon that person's personality, or being left by themselves? Those are the kinds of questions I think we have to ask ourselves.
JANE
How are we going to find the answers to that?
LIONEL
You know what? I think we're going to find out through pain. We're going to see cases where it works well and cases where it doesn't. Ideally, you would like to do the research and get all the answers, right?
JANE
Yeah.
Once technology is out there, it's hard to establish effective policy
In reality, we never have all the answers. And things move so fast. The old saying: once the horse is out of the barn, it's hard to bring it back. Once the technology's out there, it's hard to legislate it and come up with effective policy.
JANE
I presume, from what I've read and heard talking to other people, that there are attempts to write legislation to either moderate it, control it, limit it, or assign responsibility, which is another aspect.
LIONEL
So there is, right?
JANE
I can tell from your tone of voice, you’re skeptical.
LIONEL
So here's the problem. In years past, if I wanted to give you a great example of that, I would say, "Look to the EU; don't depend on the U.S." Or even China, for example; don't depend on them. The EU was our best hope, because if the EU mandates something, a lot of times it becomes easier, even for the companies in the U.S., to follow along. It just becomes cheaper. Now, the problem is the EU has done this, and now they're in a bind, because the number one thing you need to produce good AI is data, human data.
Does regulation slow progress?
And the EU has been really good at protecting privacy. Now, they don't have a lot of good AI programs. If you look at the top LLM systems, it's the U.S. and China, two countries that have made data widely available, for better or for worse. And so now the EU finds itself almost a generation behind because of their really good privacy laws, because they really did a good job of doing the right thing.
JANE
Oh, yeah.
LIONEL
Yeah. And so now they're stuck behind, trying to figure out how they can mitigate this, how they can maintain their privacy protocols and still catch up to the U.S. and China. And it's not clear that they can, at least for the time being.
JANE
Now, the LLMs are not very neutral, as I understand it, because they depend on where the information was collected. So an LLM created from North American content is going to be quite different from what China might have or what Nigeria might have.
LIONEL
Exactly.
JANE
That’s a big problem. Can there be a global LLM?
Balancing the good and the bad
No, I don't know. One of the problems is not just how they start and get updated; they also evolve with the user. If you had a U.S. LLM and a European used it long enough, it would try to evolve to meet that user base. But for better or for worse, it would still be a base set in the U.S. There would be a lot of assumptions. The good thing is that they evolve with data, and the bad thing is that they evolve with data. So it's hard to rein them in, to actually fence them in.
JANE
Talking with you, I get the feeling that it's very difficult to balance the good and the bad. This thing is good, and this thing is, maybe "bad" is not the right word, but this is beneficial and this is dangerous, and we can't build up the beneficial side without taking risks on the dangerous side.
LIONEL
Exactly. It's just a tension. You want people to use it, so you design it to be friendlier and interactive, and by definition it develops this relationship. But then you have all the bad stuff of having a relationship with a technology that you haven't quite figured out yet. It's hard to get the good stuff without the problematic stuff.
JANE
You’re in a fascinating area of work and research. You have a long future. I don’t know how long you want to work, but you have a long future ahead of you, I think.
LIONEL
Yeah. Well, we'll see. There's growing talk in academia that AI is going to replace professors, right? I don't know. Here's the problem with AI that people don't realize on a larger scale: these systems are built on past data. To the degree the future is similar to the past, AI works great. To the degree things change, and things always change, they go to shit to some extent. We always have to keep that in mind. As long as we are on some linear trajectory, okay, but the minute things shift, they're not very useful. They could be a liability.
JANE
I've worked a lot with global companies working out their digital work environments, and one thing I learned is that we can't be working from best practices. Best practices are based on the past. If you build your strategy on best practices, you're not advancing to the future. What you need to be doing is trying to find the next practices.
LIONEL
Yes, exactly.
Automation and shaping urban spaces
I wanted to ask you about something I saw that you have worked on. It has to do with urban spaces, and I found it fascinating. You talked about shaping future cities, automation, and the dehumanization of urban spaces. I think it was an academic session that you and some other professors held a few years back, where you talked about what you call a smart city paradigm and how automation and some emerging technologies can be used to make cities more balanced for people, with more equal opportunities. They can deal with a lot of problems that we have today in cities. Have you done more work on that?
LIONEL
Some. The problem is that they could; you have this two-pronged problem. For example, take autonomous buses. Autonomous buses can make transportation more accessible in one sense. They also make it less accessible if you're, let's say, blind or disabled and you need someone who drives the vehicle to help you get on. So you have this balance. But the idea behind these smart systems is that you could be more efficient, more effective, and provide more services. For example, imagine you had autonomous vehicles, but they were routed at the level of the city. There would be no need for stop signs, maybe. Everything would be coordinated, everything would be safe, a lot safer than what we have now with autonomous vehicles.
You have a case where you can provide a lot more services. People are looking at this: imagine if, in a smart city, someone who's elderly could literally go from their home to their doctor's visit without any assistance from a person. Imagine a series of autonomous vehicles and robots in that process coming into their home and picking them up, so they can live an independent life. In a smart city, possibly in the future, you could have a city where not only have we reduced greenhouse gases, but a lot of people have independence in a way that they couldn't have now.
JANE
Would a smart city still have areas of the city dedicated to poor people and people without jobs? I mean, that lack of equality, how could that be dealt with?
LIONEL
I think a smart city would reduce some of the lack of access that poor neighborhoods have. That's what we look at. If you live in a poor neighborhood, there are a lot of things you don't have access to. The idea with smart cities is that you could reduce those disadvantages. It wouldn't make everyone's income equal. The idea is that if you lived in a poor neighborhood, you would still have access to many of the same opportunities and resources that people have in a more moderate-income place.
JANE
Well, I have been in France for decades, and I remember what Paris was like a long time ago and how much it's changed. There are certain areas in Paris now where I wouldn't want to go, and other areas where I didn't want to go before that are now fine. So there have been a lot of changes. I mean, Paris is an example of a large city that's trying to change.
Security robots: pros and cons
But let me show you how difficult it is. One of the things a smart city might have is security robots in bad neighborhoods to ensure the safety of residents.
JANE
That’s interesting, yeah.
LIONEL
On the other hand, another way to look at it, and people have criticized this, is that maybe those robots are just one more way for the authorities to oppress poor people. Imagine you wanted to make every place safe. If you had more cops on every street corner, you could do that, right? Well, how about security robots? Imagine those places you mentioned that were not safe. If those robots were out patrolling, and remember, they don't necessarily need weapons, all they have to have is a camera and the ability to report a crime, all of a sudden that changes the dynamics.
You can imagine if you're walking down a dark alley and there's a security robot next to you, you would feel safe, because if someone tried to do something to you, it would immediately report it. So a lot of people think, "Okay, here's a great example of how a smart city element could help people who are living in high-crime areas." At the same time, many people in those areas feel they're being surveilled. They feel threatened. So this is the tension: the potential versus the problems of a lot of this AI technology.
JANE
And it has a lot to do with leadership, in the sense that, in the case of a city, people either trust the heads of government to do things well, for example, to make sure that people are protected at night by the robots, or, as you said, people who are against the government might think the government's using the robots against them. I don't know how we could get a balance between those two.
LIONEL
Well, we don't have that balance yet. That is the ultimate problem, which we talked about earlier: it's hard to take advantage of having security robots, for example, that could be low-cost and could fight crime. At the same time, they do surveil, they do look, they do record. They are somewhat intrusive, although under U.S. law, anything that happens in public is free domain, so you should not expect any privacy in public. On one hand, people will say, "These are the solutions to our problems that we've been waiting for." On the other hand, when you try to implement those solutions, you get a lot of pushback, because people see those things differently. And that's just an example of a larger problem with a lot of this AI-enabled technology that we're deploying.
JANE
How do you see the future of using these kinds of devices in urban areas? Do you think all the automation and the robots are going to go in a good direction? What's your personal opinion?
LIONEL
Yeah, my personal opinion is that it will do both. There will be some cases where it's clearly beneficial and some cases where it'll be problematic. I would say this: it depends on who you are. Suppose someone is undocumented and they're on the street. These security cameras can pick that up. That won't be a good thing for them and their family. At the same time, suppose someone is a criminal wanted for a crime on the streets. Then it could be a good thing for keeping people safe. So, is it a good thing or a bad thing? I can say this: I think overall it has the potential to be a good thing. Individually, it's going to depend on your perspective. Let me give you an example of what I just said.
A better system might say, and we have kind of had this in America from a policy standpoint, that if we see someone who is a murderer, we will contact the authorities, but if someone's undocumented, we won't report that. I think people are willing to work out a trade-off, some level of intrusion for a high enough level of value. They might think, "Okay, I'm willing to accept some intrusion for you to take someone who, let's say, is a serial killer off the street, but I'm not willing to accept that intrusion just for you to apprehend an undocumented person who's minding their own business," for example.
JANE
Yeah, well, I don’t know if America can reach that point of balance or not.
Determine what trade-offs we as a society accept
Yeah, no, we can do it. We've done it in the past. For a long time, we've had laws like that on the books, where if you were undocumented and you didn't commit a crime, you could come and go, basically, like in California. So I think the key thing is that we as a society have to determine what that trade-off is going to be. And I think we're better off, for better or for worse, having a somewhat bottom-up approach, where people are involved and they actually make a decision.
Let me give you an example. Suppose in one city, people say, "We don't want these surveillance systems to identify people who are undocumented." So they vote, and in that city it doesn't do that. But the next city, five miles down the road, votes otherwise, and there it does. That's the benefit of the technology: you can decentralize the way it is used pretty easily. And then people who are undocumented, or against that policy, don't go to that city.
JANE
Yes. I do a lot of thinking about centralization and decentralization, and I think overall decentralization is better than centralization. It depends on a lot of things. But the example you just gave is a question of decentralization: giving people the ability to make decisions about the way they want their area to be.
LIONEL
I think that’s the key. I think we can’t just sit back and be passive and see what happens.
JANE
What should people do? You say we can’t sit back and be passive. So could you give some practical tips?
We need to have open public discussions and groups that lobby for citizens
One of the things we do is have open public discussions about unemployment and healthcare, but we really don't have a public discussion about the things that cause those unemployment and healthcare issues. Have we in America had a debate about the use of autonomous vehicles? No, they just popped up on the street; no one said anything. I think we could have some open discussions about how we want this technology to be used, so that we can guide it and shape it and have a say in how it's going to be designed and deployed. I think we're always behind to some extent, and I get it.
There's a race, or everyone thinks there's a race, so every country wants to be first, and they think regulation restricts innovation. People will look to Europe; the old saying is that America innovates and Europe regulates. They'll say, "Well, do you want to be over there like them?" But the other side of it is that we do make a lot of mistakes. We move fast and we break a lot of things, and part of what we break is our societies and our communities. And I don't think we've fully captured the cost of some of those breaks.
JANE
Can we ever? We would need an inspirational leader, wouldn’t we? Someone to voice what you just said, but on a national level.
LIONEL
Yeah, I think so. I think we need a national policy. I think we need a national conversation. You're absolutely right. Think about it: who lobbies on issues of technology right now? It's pretty much the corporations. In healthcare, at least, there are corporations and then there are patient advocates. But there's no strong lobbying group for citizens on how technology is used. There's no other side to the discussion, for the most part, yet.
JANE
We need that.
LIONEL
Yes.
JANE
I never thought of that before. Groups that lobby for the citizen; how would you describe that?
LIONEL
I think you've seen some of this already. Some communities have tried to ban Uber, and they did ban Uber. You can imagine communities doing the same thing for autonomous taxis. They could say, "Look, in our city, we're not going to allow that." The argument might be that it's not safe, but I think the bigger argument is that it's going to take jobs: "We decided in this city that whatever gains we get from having autonomous rides, we would prefer the economic gain from having people drive people around."
JANE
And so cities would have to make that decision at a city level.
LIONEL
It could be city, it could be state. I mean, I think it all depends on how much flexibility you’re allowed. There are some city ordinances, there are some state ordinances. I know here in Michigan, they’re going to build a lane on the interstate designed just for autonomous vehicles.
JANE
They are?
LIONEL
Yeah.
JANE
Now, is that a good idea in your opinion?
LIONEL
It’s a good idea for safety. Basically, they want to establish a safe route for autonomous vehicles from point A to point B, and the interstate is the best way to do it. The safest way is to separate the traffic. That’s the safety standpoint. Now, if you drive people to and from the airport, for example, that’s your job directly going away. So once again, have we decided that this is a good thing? I don’t know. Someone decided. There’s a competition, even within America, for every state to be the big hub of AI and autonomous technology, so each state wants to one-up the others. They’re bending over backwards to accommodate these tech companies as opposed to trying to determine what’s appropriate and what’s not.
JANE
I think that in America, it’s often the very big corporations that have a heavy, heavy influence on how things happen.
LIONEL
And to be fair, we do get a lot of innovation. On one hand, people will say that’s why we innovate. On the other hand, there are whole cities in Michigan that haven’t recovered from the decline of the automobile industry, and what’s the cost of that? We haven’t really figured that out yet.
Robots integrated into families over the next 10 years
I had a last question for you. In fact, two last questions. One is, how do you see the future, say just 10 or 15 years from now? What do you think it will be like from the viewpoint of the different topics we’re talking about now, robotics, AI and so on?
LIONEL
I think robots will be integrated into families. I think your grandfather would have a robot girlfriend/home keeper/nurse. I think people will develop strong relationships with these different types of technology, and they will become integrated into our society in a way that we would probably find very difficult to understand now, and in ways that I think may change the laws, what we consider to be human or to have agency. Of course, we know corporations are people too, so the idea of artificial entities having personhood is not unusual in America. Imagine if someone brings their companion robot with them on a flight; that person is going to expect that companion robot to be treated as anybody else would be treated.
So I think you’re going to see this line crossed, where people begin to see robots the way they see pets, as some extension of their family, in ways that we used to reserve for other people. People already say, “Oh, my nanny is part of my family,” or, “My housekeeper is part of the family.” I think you’re going to see a lot of that across American society. So that’s one thing you’re going to see. The second thing is that I think a lot of First World countries like the U.S. will leverage autonomous automation to compete on a global scale against cheap labor in Third World countries.
And people don’t realize that when AI and robots take off, the biggest impact is really going to be on these Third World countries that rely on cheap, repetitive labor, right?
JANE
Yes.
Developing nations will slide back, not forward as jobs are done by robots
And so even though these jobs will, quote-unquote, “come back to America,” there won’t be as many of them. The same plant that made X amount of widgets with 1,000 employees might have 250 employees now. So the jobs will come back to the U.S., but those Third World countries are going to suffer more than the U.S. because they’re not going to have a lot to fall back on. If we’re not careful, you’re going to see a lot of developing nations slide back, not forward.
JANE
Wow.
The human touch will cost more. What do you want to keep uniquely human?
Yeah, and I think you’re going to see a lot of advances with AI in terms of work and in terms of improving quality of life, like healthcare. Healthcare could be cheaper. A lot of things that were constrained by the cost of human labor will come down in price and become accessible. But one of the most important things you’re going to see is that the human touch is going to cost. If you go to a hotel and you want to talk to a real human to check in, that’s going to cost you. If you want to see a real human doctor, that’s going to cost you. So the human touch is only going to be available to the elite rich; for everyone else, it’s going to be the touchscreen, the AI bot, and the robot.
JANE
Wow, that’s an interesting view of what I would call the near future, 10 or 15 years from now. Okay. Anything that you’d like to add?
LIONEL
I would say that if I had my way, I would fund research at a national level, even a global level, to figure out how much of this is good and how much of it is bad, before everyone gets into this space race. Maybe we can spend some money to figure out: when is this good, when is this bad? What should we be avoiding? Are there things that we should never want to use AI and robotics for, no matter what? A perfect example: if someone has cancer, we would never want an AI to inform that person of their cancer. We’ll always want that person to come in and talk to someone face-to-face, right?
JANE
Yes.
LIONEL
So I don’t think we’ve done the due diligence to figure out, going forward, which things we want to keep uniquely human and should protect.
JANE
I think what you’ve just said is really important. In your position at the university, have you considered initiating analysis of that sort, or is that too big a scale for one university?
LIONEL
Yeah, maybe. I’ve been thinking about it in certain contexts, whether it be relationships or work. But you’re right, it’s a big scale. I think it has to be at a national level, the way you do healthcare and a lot of these things. People have to look at it at a high level and actually begin to figure out what it is that we really want to automate or hand over, and what we don’t. If we allow the capability of the technology to set the guardrail, we’re always going to be late. You hear it all the time; people say, “Oh, the cars are never going to be able to drive themselves. Don’t worry about that.” But if “the technology will never happen” is our policy for the guardrail, then we’re screwed, because eventually technology will always catch up. And when it catches up, we’re too late. We’re behind.
JANE
And you think we have enough information now generally, to be able to look at these different issues and decide what kind of guardrails make sense?
LIONEL
We can begin to talk about them and about which areas of guardrails we want to look into. I think we have enough to at least begin the conversation and start the process. That’s the first thing we have to do: have that conversation and say, “Hey, what is it that we don’t want to automate? What are the things we want to keep to ourselves? Where are the things that are uniquely human?” We can start that conversation. And you can imagine that conversation would vary depending on the areas and spaces, on communities and domains of work and activity.
JANE
Wow. Interesting. You’ve given me a lot to think about.
LIONEL
No, I’m happy to. Thank you. I think it’s great, I think it’s fascinating. I wouldn’t say we’re at a tipping point, because everyone says tipping point, but I think we’ve depended on the technology not being good enough to save us. And I think we can’t depend on that anymore.
We talk with forward thinkers, scifi visionaries and pioneering organizations about people and society, AI and humans, the earth and survival. Read more Imaginize.World
