Below is the full transcript.
Introduction
When we make a decision about AI, when a producer or a user or a company or an individual makes a decision about AI, how are they thinking about it? Where does that thinking come from? There are four ways of looking at it, four perspectives, and each one has a logical basis.
JANE 00:27
I’m Jane McConnell and welcome to Imaginize World, where we talk with forward thinkers, pioneering organizations, and writers of speculative fiction. We explore emerging trends, technologies, world-changing ideas, and above all, share our journeys, challenges, and successes.
00:46
Art Kleiner is an author, advisor, and forward thinker. In fact, I would say a very forward thinker. He has worked and communicated for years about learning organizations, corporate management, corporate strategy, and decision making. The AI Dilemma: Seven Principles for Responsible Technology is his most recent book, which he co-authored with Juliette Powell. I love the title because it gives us a dilemma, the AI dilemma, and then proposes principles for us, not solutions, not answers, but principles, fundamental things that we need to understand and think about in order to make decisions.
01:29
This is a very important conversation. Art raises key issues, key questions that we need to think about. The first is: who has the power to make decisions about AI projects? They talk about four power logics, and they often collide. They are the engineers, the business people, the government, and social justice. Decision makers on AI projects need to think about all four perspectives together, keeping them in mind simultaneously as they define a project.
01:58
The second point is how to help people understand the logic. It has to be explainable, and the explainability needs to come from different angles for technical people, executives, auditors, lawyers, regulators, customers, and the general public, and these explanations are going to be very different. They involve the logic of the algorithm, the business reasons, and especially the assumptions on which the decisions were made during the process.
02:23
A third point that we’re all thinking about a little bit, I imagine, is who owns the data. How can a person control data about themselves that’s available online or in systems? Related to this is the fourth point: can this data be monetized, and should it be? If so, by whom and how? Are there ways of monetizing data that are useful for society?
02:46
A fifth point is how these AI solutions and projects can be regulated. How can data be regulated? Should it be regulated at a global level, locally, or through self-regulation, counting on organizations to regulate themselves? Should we have universal rules or specific, national rules? Now, let’s talk with Art about his ideas.
Gig Mindsetters and fear
JANE
So hi, Art.
ART
Hey.
JANE
Here we are again, face-to-face. It makes a change, doesn’t it, from our email correspondence that we’ve had over the years?
ART
There you go.
JANE 03:24
I wanted to thank you, Art. I positioned you in my book, The Gig Mindset Advantage, as one of the pioneering gig mindsetters in the world. You actually read The Gig Mindset Advantage and you gave me a great blurb for it. I got a lot of great compliments, but what I liked about your blurb was that you said something no one else said. You said, “The gig mindset is not the problem. There is a gig mindset around us. It’s fear that is blocking a lot of people.” I thought that was a very strong statement, and I have kept it in mind for a long time. I think that you’re doing a lot of work in that area indirectly.
ART 04:04
Well, that might be a good launching pad to talk about artificial intelligence. What I meant by fear is that when we work for a large company, and I worked for PwC for four or five years and for other large consulting firms before it, we get used to the guardrails and the systems and the structures that are there. They support us in ways that we’re not even completely aware of, in the same way that a good marriage supports people: they feel comfortable and it’s hard to leave. When you do leave and you’re in the gig mindset situation, you’re only as secure as the next round of gigs that you’ve already got lined up. There’s no certainty.
04:55
With the advent of automated algorithmic systems, we working people, knowledge workers, people living in relatively wealthy countries, a significant number of us are now confronted with systems that are not designed for us, but that are designing their own guardrails and structures and systems, and we have to pay attention to that.
Dilemmas are hard choices
JANE
I assume that one of the underlying motivations for writing the book you co-authored with your partner Juliette Powell was to make people aware of this. I think the title you chose, The AI Dilemma, is a very powerful title. I looked up the word dilemma to really get a hold on what it is. I know you have to make a choice, but the definition actually includes the notion of a difficult choice. It’s a troublesome thing. It’s not just a choice. It’s a situation where it’s not clear at all what’s right and what’s wrong, what’s good and what’s bad.
06:03
So you’ve got this idea of a dilemma in your title, and then you go on to reassure people; the exact words you use are “seven principles for responsible technology”. So it’s like: we’ve got this problem, and these principles are going to help you. Was that your intention with the title?
ART 06:23
That was exactly our intention. It goes back to … we did not pull the book out of the air. It was based on Juliette’s work. She has a large network of people in the technology industry, big tech and entrepreneurial tech, and I have a longstanding background writing about and thinking about automated systems and computers and business and management. She was at Columbia researching what became a dissertation on whether companies could be trusted to regulate themselves around artificial intelligence. The short answer was no. The track record is not great. The long answer is, however, there are ways in which we have to.
07:22
In the right hands, AI does amazing, miraculous, powerful things. It is not going to be postponed, let alone eliminated. It’s here to stay, and for good reason. We, humanity, are already dependent on things that AI delivers. In the wrong hands, it can be devastating to individuals and potentially to large numbers of people. Often the right hands and the wrong hands belong to the same group of people, the same group of decision makers. So it becomes an organizational learning dilemma.
08:02
We talk about the two horns of the dilemma. Life is intolerable in one way and it’s intolerable in another way, and the dilemma is turning out to be: do we regulate it quickly and forcefully, or do we keep our hands off and let companies develop their own path? Neither is fully acceptable. If we’re talking about what we want, what we want is for the technology to work out seamlessly without much effort. In a way, computer technology has always had that goal implicitly, at least for the people who use it.
08:49
You won’t have to use a typewriter, you won’t have to use carbon paper, you won’t have to photocopy things. It’ll move seamlessly from one thing to another. It’ll meet you more than halfway in terms of matching your habits, and it will get you instantly to the product you want, and it’ll arrive at your door, or to the information you want. All you have to do is click on this little box that says “Okay” for cookies, or it’ll take your data and give you exactly the ad or the material, the news report that you’re looking for, until you’ve got 500 of them and there’s nothing else that you can, and they’re crowding out all of [inaudible 00:09:35]
JANE 09:36
It’s interesting because people get the idea of control. You talk about this in the book, how people want to control things, and so they get the feeling, like you just described, that you can get what you need quickly by using whatever is on your computer. What people don’t understand is what’s going on underneath or behind all that.
ART 09:57
This was really hard for me, Jane. This was probably the most difficult aspect of writing the book with Juliette. I have to give Juliette a lot of credit. I’ll just start by saying she saw this issue. She was hearing people in the industry say, “I’m being asked to do things that are really unconscionable.” In one case, somebody was scouring social media, writing programs that would surface issues of predatory behavior or targeting of minors or hate speech, that kind of thing. These issues were coming to the surface, and she would go to the clients or to the company and say, “Do you want us to track this?” and they would say, “No, because we would be legally liable, and so we want you to put in systems where we don’t ever have to hear about it.”
10:52
There were other examples of this, lots and lots of examples, until Juliette finally said, “Okay. This is a real dilemma. Companies are not fully able to manage it even when they want to, because of the nature of the systems. So what comes next? What do we do?” She wrote this up in a dissertation. I was working on a book series at the time. She showed it to me. I said, “You have a book here,” and that was the one book in the series that the publisher was interested in, and she asked if I would be her co-author. Now here we are. I knew a little bit about the technology, but I’ve been immersed in it now for two years.
Four logics of power
JANE
Something that strikes me in your work is that so much of what you say in the book reminds me of things that have to do with the culture of organizations. I’ve worked as a strategy consultant for huge, global organizations on their digital workplace environments and techniques and communication. I did that for over 20 years. I’ve stopped now, but for over 20 years I saw, without even talking about AI, so many of the dilemmas that you’re talking about. One thing in your book that really struck me, and I wish I’d known it 20 years ago, is the four logics of power. I think that underlies a lot of the problems that organizations have. Can you talk about that a little bit?
ART 12:31
That was at the basis of Juliette’s thinking when she was putting it together. So when we make a decision about AI, when a producer or a user or a company or an individual makes a decision about AI, how are they thinking about it? Where does that thinking come from? There are four ways of looking at it, four perspectives, and each one has a logical basis. The perspective of the engineer is that the work has its own quality. I am loyal to the quality of the work, and I am loyal to the people who pay me because, after all, that is the nature of this kind of work. It’s craft work, often done for hire. There’s a real respect and professionalism involved in that, but it does not include consideration of outcomes.
13:26
There’s no Hippocratic oath equivalent among software engineers. They’re not even trained and held responsible as engineers in the same way that a mechanical engineer is responsible for a bridge not collapsing. They don’t get certified in the same way. The premise is that they’re going to create great code and put it out there. You can’t have it perfectly bug-free right from the beginning, so you have to test it in the real world with real use, and then you find out what it’s good for, and sometimes you find out it’s good for something you didn’t expect. That’s part of the joy of software, part of the ethos of software quality. That ethic is very much part of the engineering code. It’s great, unless the unintended consequences start to affect people.
14:18
The next logic is the business logic. We are here to survive as a business, grow as a business, make profit and return investment to our shareholders, generate cash flow, basically keep people solvent and employed and getting richer. That is great as long as it’s not at the expense of others. The thing is, in business logic the inherent desire to maintain a good reputation is generally not strong enough to withstand the other pressures. I’ve worked with, and in a couple of cases for, companies that had very lofty ideals, and the question is, what happens when the ideals clash with the immediate need for the core group in that company to get wealthier and have status, et cetera? The ideals are weaker. So what do we need to do to strengthen them? There’s a lot we could say about that. It’s basically around the business logic.
15:29
The third logic is the government logic, or the regulatory logic, where we have entities that are responsible for what happens within a geographic area. They have to administer the needs of people there, they have to manage the defense, they have to regulate things that might turn out to be problematic. So a lot of governments, certainly the United States with Biden’s executive order in October, the European Union with the pending AI Act, Canada, and many others, are all now looking at how to regulate this thing. In order to regulate a technology, you have to have experts within the government, and the government can’t pay the experts as well as private industry can. So there are always concerns about being left behind. The government, therefore, has a logic which says, “Despite all these pressures, we have the responsibility and the force of law, and it’s up to us how we wield it, how we manage it.” If the government is trustworthy, then it’s fine.
16:43
In the book, we tell a story about an abuse scandal in the Netherlands.
JANE
I remember. That’s a very impressive story.
ART
We learned about it from someone who was caught up in it. She was a consultant and an academic, she was affected by it, and her accountant was, I believe, threatened with jail. Then we looked into it: 26,000 people were affected by a government program, an AI-based program that essentially used predictive analytics to say, “These are the people likely to commit childcare benefits fraud,” and they happened to be single parents, people working more than one job, and immigrants, from Morocco and Turkey in particular. Those 26,000 people were forced to return benefits they couldn’t afford to repay. Many lost their houses. There were bankruptcies and broken marriages, and 1,500 children were forcibly separated from their parents. This was an action by a government in the European Union, the Netherlands, a government charged with helping its citizens have a better life.
17:58
So one of the things we learned from this is that AI regulations and uses by government are as good or as bad as the decision makers there, but they operate with the guiding parameter that “we have to be responsible.” That’s the government logic.
18:15
Then the fourth logic is the logic of social justice. What’s going to happen to people, and who is looking out for them, particularly the vulnerable: people who will lose their livelihood, their homes, their children, or just be mistreated by the system in ways they don’t even know about if there isn’t an opportunity to question it? When those four logics come together in a room talking about what we are going to do, you have the creative friction we write about, where you can get through a dilemma reasonably.
Ownership, control and monetization of data
JANE
The idea of creative friction is a very strong idea. In fact, I had some questions about that for a little later. Before we get there, Art, could we talk about the … I’ve got too many questions; we’re not going to get through them all. We’ll get to some of them. I’ll just pick my favorite ones. First of all, I’m skeptical that a lot of these good things can actually happen. Maybe I’ve been around the block often enough. I’m into my retirement years, and there are just so many things that I have seen. My consulting work over 20, 25 years has shown me so, so many things. I’ve worked for a number of UN organizations. I’ve worked with the peacekeepers, and then I worked with a lot of other organizations, pharmaceutical companies. You can’t get more extreme than that. I’ve just seen a lot of different cultures. That’s what makes me skeptical about working things out.
20:06
I wanted to talk to you about one idea that came up in your book. It’s in the chapter called Reclaim Data Rights for People, and you talk about monetization of data. You don’t actually take a position on whether people should be able to, but you talk about possibilities of people being paid for the data that they contribute to the big system. Could you talk about that a little bit?
ART 20:31
I think payment is the front edge of this. Let’s go back to the question of control because we never quite nailed that and then I’ll go from there into payment.
JANE
Okay. Good.
ART
Control is tough. We talked to Sheena Iyengar, a management professor at Columbia who has studied this at length, and she said, “Control is always difficult. If it’s easy, it’s probably not real control.” Clicking a button to get access is not real control. You don’t know what other forces you’re setting in motion. So either you give up, or you monitor it rigorously, but you pay attention. When it comes to your data, we are our personal data, as Juliette says. We see one another first through what we’ve said, what others have said about us, what we’ve done. We can typically be tracked. Our history is available, so people know us through the data we’ve generated. And especially now, with misinformation rampant and photographs and videos so easy to fake, the reliability and veracity of data is more and more uncertain.
21:49
Therefore, in the ideal universe, if I said something, wrote something, had something out there about me, it should be easy for me to control and monitor and manage what is done with it. There should be a platform, probably using AI, to make it easy for me to navigate the technicalities necessary to say, “I want this data to be restricted and that data to be open.” AI could very easily help with that kind of thing if it were trustworthy. It’s a small step from there to say, “Well, if my data is highly valuable to a pharmaceutical company, I should get a few cents every time they use it or someone uses it.” Is it worth it to put that kind of system in place for one or two transactions? Maybe not, but it is as a way of thinking about all transactions.
22:51
In the same way that the URL became a universal way of identifying an individual, more people are probably identified by their email addresses than by their physical addresses these days. I don’t know that that’s true, but it wasn’t a way to identify people 40 years ago. Now, the same may be true of our data. The idea of owning our data may be difficult to fathom, people may wonder whether it could ever happen, and there would be a lot of resistance against it. But at scale, when you think about the ways in which data tags us, we use it, and we are identified by it, it almost feels inevitable that at some point either things move in that direction to a large extent, or life becomes cognitively very different than it was.
JANE 23:51
Somewhere in the book, and I don’t know if it’s directly related to this or not, you talk about someone who has the idea of defining certain guidelines about the behavior of organizations, and when organizations don’t follow those guidelines, it might have to do with the climate or refugees or something like that, then they should be punished, in quotation marks, “entre guillemets” as we would say in French.
ART 24:20
We talked about the AI Act of the European Union. So it wasn’t so much about general corporate behavior. Holding companies accountable for general corporate behavior, that is called regulation, and that’s a whole other subject and a whole other book. What we did talk about was specifically AI regulation, where you’re trying to regulate something whose effects are uncertain, unknown, and perhaps uncontrollable. So that’s a different kind of regulation than saying, after the fact, “We saw this abuse, therefore you’re not going to do it anymore.”
Lifelong responsibility for data
JANE
I see in my notes that it’s from someone called Casey Cerretani. He says, “Pick a vector, climate change, human trafficking. You could pick any of these atrocious situations that we’re facing as a global community … and then hold accountable companies that are directly or indirectly aiding or abetting some of these. From there, you could not sell software to them ever,” and therefore blocking them from acquiring data, is how I understood it.
ART 25:26
Well, that is what Salesforce is doing on paper right now. Their license agreement with anyone using their software includes a list of things that they don’t want their software used for. One of them is military applications, I believe, and fraud and the kind of surveillance that leads to targeting of ethnic groups are on the list, along with a lot of other things. The question is, how do they enforce it? Do they actually say to a company that’s putting out surveillance cameras and software that targets groups, “I’m sorry, you can’t use Salesforce.com anymore”? Do they give them three strikes? With regulation of any sort, the issue is enforcement.
26:21
Casey’s point, however, is that the use of AI in regulation and tracking changes the game. I don’t think there are as many secrets possible now as there were in the past. It’s too cheap and easy to uncover them. I could be very wrong about that, but it is certainly a technological race to break into secrets and then to reveal them.
A better world: secrecy or not?
ART
Let me tell you a story. I teach scenarios and we do a lot of scenario work at KPI. I have been teaching a class at New York University on the future of media and digital media for a long time. About 15 years ago, I started asking my class this question. Imagine it’s the future, and you have a good marriage and you have children, and it would really be terrible if your marriage broke apart. Yet one day you’re seen leaving the wrong place at the wrong time with the wrong person, and a camera on an automobile driving by takes your picture, and it gets automatically posted. There’s a bot on the web that knows your image, and it associates your face with the picture. One of your cousins has another algorithm on something like Facebook that automatically posts any picture involving you, and your marriage falls apart.
28:12
At the time I said, “Is that a better world or a worse world than the one we live in?” At the time, that was the future. When I first asked that question, it was, I think, in the early 2000s. Just about everyone in the class said, “It would be a better future. There would be no-”
JANE
Sorry. They said it would be a better future?
ART
Yeah, there would be no secrets. Everybody would know everything. We wouldn’t have to worry about who was doing what. It would be terrific. I kept asking the question every year. I took a couple of years’ break and came back and asked the question in 2013, 2014, and now it was no longer the future. Now it was getting close to the present. By 2015, there were surveillance cameras everywhere, facial recognition was highly developed, and people were posting things automatically. There were automated systems. Now people were saying, “This can’t get any worse. If this gets any worse, it’s going to be really a problem. This is a much worse future.”
29:23
By the way, we’re heading towards it full steam ahead. That’s what they said in my class in 2014, 2015. Now, I don’t even talk about it because it’s already here. Now the question is, what happens when you don’t even need the car riding by with the camera, because the whole image can be manufactured and [inaudible 00:29:47] difference between that image and some other image. If you’re walking out the door, you can say that was a deepfake. If you’re suspicious, you can say, “But who would want to do that?” and you’re into a whole other realm of substantiation and what-is-reality conversation. That’s where we are now.
Regulation on a global scale
JANE
Do you think it’s going to be possible on a global scale to regulate this?
ART
The EU AI Act has a really intriguing way of answering that question. They divide AI into four categories. Minimal risk doesn’t need to be [regulated]. Limited risk is like, if it’s a bot and it’s doing therapy with you, it has to tell you it’s a bot. It can’t pretend to be a human being. If it’s a friend online or whatever and it’s a bot, it has to identify itself as AI; otherwise, it’s forbidden. The third category is high risk: the self-driving cars and all of the predictive analytics that are used for all sorts of useful purposes, and a lot of the research. There, what the EU says is that it has to be audited in the same way that a publicly held company is financially audited.
31:06
Now, we have to have external, technically trained auditors. There is an emerging cottage industry of potential auditors, either for acts like the AI Act or for legal liability, who are now brought into companies, often against their will, to do some due diligence around AI and auditing. It’s going to be a major profession, I think.
31:36
Then finally, there will be things banned. There will be bans. Generative AI was banned in Italy for a month, and then they rescinded the ban. It’s very hard to ban software because it’s so useful, particularly this software. So there will probably be very few things that are banned. If the EU bans something, they can apply that ban to companies that do business in the EU regardless of where else they do business in the world. They see that as a constraint on some authoritarian countries. It may be that it leads to a cold war between the EU and its friends and all other countries, who knows, but that’s a scenario question.
32:24
The end result is that there will probably not be universal rules about AI just as there are not universal rules about weapons, but there are some universal agreements on things like chemical weapons and nuclear weapons where people just agree it would be too horrible to let this run rampant.
32:53
Now, the difference is that AI is cheap, easy to use, and in many ways accessible. Then we get into the open source question. Is it better to allow people to exercise individual control over this, or does it need to be controlled by some top-down hierarchical watch: companies watching over the robots, governments watching over the robots? But then you have the question, who watches the watch robots? It really is a dilemma. The cat is out of the bag. How do you catch the cat and keep the cat from doing real harm?
JANE 33:37
I don’t think there can be an effective global governance that involves everyone in the world, or all countries in the world, because we always get back to the interests of individual nations.
ART 33:53
Well, Jane, if you believe that, then there’s no point in this conversation or any other. We might as well just keep tapping away at our keyboards until the bomb comes and takes us away. When confronted with this kind of thing, I think my optimism comes from a couple of places. One is, what’s the alternative? And the alternative is dire enough that I have to assume that enough fellow members of the human race are thinking and acting about this that something will emerge. It always has. I believe in Thornton Wilder’s play, The Skin of Our Teeth.
34:31
I think about it as like driving down the highway: you see accident after accident on the left side of the road and you think people don’t know how to drive, we are doomed as a society. But you don’t notice all the cars that are staying in lane, following the rules. It’s not just because they’re going to get caught; they don’t want to break the flow. They’re in that flow because that’s what works, that’s what makes life better. When people have a stake in life being better, then things tend to work, because people make them work.
35:11
AI is no different. It automates the biases people already have, the things people already do. The nice thing about having it automated is that it brings those biases to our attention so that we can see what our impulses are leading to, and that gives us more feedback about the damages and unintended consequences of the things that we as individuals do. That may turn out to be very helpful, in the same way that seeing the red light flashing behind you is a forcing function even if it’s not intended for you.
Bright swans for a bleak future
JANE
Well, I’m glad you’re more optimistic than I am. I would love for that to be the case, but my feeling is that I’m not so sure it’s likely. Now, Art, I found a piece you wrote, not in the book but on your site kleinerpowell.com, that I found fascinating. You mentioned scenarios earlier, and I was going to ask you about scenarios, but we’ll just go straight to this one: your bright swans.
ART
Oh, right.
JANE
I found that one: hopeful scenarios for a bleak future. I found that article very, very interesting.
ART 36:22
Well, maybe we should pick that up. The idea is that things that work great are often unexpected. During the pandemic, there were those few weeks when Venice’s water cleared up because industrial activity stopped. All of the toxic emissions and pollution came back with a vengeance everywhere, but it demonstrated what the alternative could be. That would be an example of a bright swan. Another example of a bright swan is the way people are settling outside of urban centers by choice when remote work makes that possible. Few people expected that. It’s very disruptive; real estate values are going to rise and plummet in turbulent fashion for a while, but it will become a new normal that will probably be better for humanity.
37:32
There will still be cities, and reasons to live in cities more than ever, but some of the problems associated with cities, like transportation, are starting to unravel. Other things have to happen. There are conversations like this one; we weren’t talking about AI this way 10 years ago, and there are lots of things like that. The awareness of what it means to be trapped by a system is spreading. It’s terrible that people are trapped by systems, but the fact that they’re aware of it matters, because in a democracy that’s not supposed to happen. The government answers to its people as a whole, not parts of them, not the well-connected or those who are in the right party. That’s the principle that I think is emerging as a way of looking at things.
Misinformation and democracy
JANE
Interesting. You mentioned democracy. You know Yuval Noah Harari, who wrote Sapiens? He talked about democracy. I’m just going to read a short piece to you. “One of the strengths of democracies in the late 20th century was that they were better at data processing than dictatorships. Authoritarian governments concentrate information and power in one place, whereas democracies diffuse the power to process information and make decisions among various institutions and people.” He goes on a little bit and then he says, “AI, on the other hand, makes it possible to process huge amounts of information in one central location.” So he then concludes that it might make centralized systems, authoritarian regimes, more efficient than democracies with distributed data processing. Then he finishes his paragraph with, “If democracy does not reinvent itself, we may soon be living in digital dictatorships.”
ART 39:37
Well, he uses words like may and might quite a lot in that passage, and we don’t know. We don’t know. When we look at the future, we have to think about trends like that in terms of how much is actually happening versus the principle that the bigger the wave, the stronger the undertow. We’ve seen that with populism: the bigger the wave of globalization, the stronger the undertow of populism. Still, when populism becomes too big a wave, then there’s an undertow against that.
40:12
The issue with AI, and the whole reason why misinformation is so important, is not because misinformation erodes trust, although yes, that too. It’s not because misinformation can be used to target particular individuals, although yes, that too. It’s because accurate information leads to better decisions, and really terrible decisions are often made with inaccurate information. We’ve seen that in every authoritarian regime.
40:46
The most obvious example is the invasion of Ukraine, where misinformation about Russia’s chances led to what appeared to be some really, really ignorant decisions. A world run on ignorant decisions falls apart. A world run on well-coordinated decisions, where the data is representative of the entire population of people or whatever else is being studied, those decisions turn out to be useful. Those decisions require work. Truth doesn’t just show up. It doesn’t spring from our imagination, and it often doesn’t spring from research in the way research is conducted. It takes real work to have verification and data that is worth making decisions around.
41:44
One AI dilemma is how much of that real, verified information is now allowed to surface and be set as a priority over misinformation. The big thing that real information has going for it is that it works. It leads to better results, and people can see the results, maybe not immediately, but over time, and it’s hard to mask them. It’s possible, but it’s much more difficult than people think it is.
Can AI improve education?
JANE
I’m thinking about education, and education for young people in particular. What changes, if any, would you bring to the current educational system, let’s say in the United States, because we can’t go country by country around the world? I think the educational system has a lot that can be changed, or I would say improved, in the way it functions.
ART 42:51
A nice, small, easy question. The education system is so complex, and I co-authored a book called Schools That Learn that covers a lot of these issues in depth. One of my co-authors, Peter Senge, has spent much-
JANE
Oh, Peter, yeah.
ART
He has spent much of his effort in the last 10 years working with educators, and I’ve done some of that work with him. His voice, among other voices, basically says, and these aren’t his words, they’re mine: a system of education that is intended to raise the quality of life for everyone in it based on their unique needs, a system that matches what people want to learn and need to learn, that does so individually but within a community, and with a sense of high respect for people just because they’re people, which is the real meaning of equity, that system does not currently exist as hierarchically constituted by the politics of boards of education in the United States, by the administration, and by the education of educators and all of those establishment issues.
44:14
So there are a lot of administrative, bureaucratic things, in the best sense and the worst sense of bureaucratic, all put in place. The school system reflects all of that, embodies all of that. Then there’s this other school system, which is teachers connecting with students, students connecting with other students, and administrators connecting with administrators. It wasn’t so long ago that the idea of a traumatic childhood being normal was prevalent, the idea that parents would beat children just to get them to cooperate, just to get them to obey.
44:57
That idea may still hold in many places, but if you say it out loud, “I believe that children should be … spare the rod and spoil the child,” and mean it sincerely, that is embarrassing to say now. Many of us have come to realize that the issues we have as adults come from the trauma that was imposed on us, unintentionally, just from growing up in a brutal world. So now that this is no longer tolerated in words, people are translating that experience and understanding into action, and it’s affecting education from the bottom up in many places, not all and not perfectly, but it’s a change. Enough people are doing it enough of the time that it makes a difference.
45:59
One of the really interesting questions about AI is, does it get used to put people back into straight rows or does it get used to give people the tools they need? Who’s in control? If we’re going to put kids in control of their own data, are we going to give them the skills they need to use that control effectively? Are we going to give them access to the tools and are we going to trust that they’ll use the tools effectively? If we trust them to rise to the occasion, will that trust turn into a self-fulfilling prophecy?
46:38
Jane, nobody has the answers to these questions. We’re going to learn by trying, and we’ll probably try different things in different places, “we” being all the people who try things. AI is going to make it easier to do that.
AI as a companion for learning
JANE
I talked with … Do you know the Chinese sci-fi writer Stanley Chen?
ART
Yes. I wrote a review of his book recently.
JANE
I interviewed him 10 days ago. He was talking about AI and education. He talked about how, ideally, every child would have an AI companion or tutor, whatever you want to call it, that would help the child discover the things he wants to learn about, what interests him, what sparks his curiosity. Stan talks about it because he wrote a book in Chinese called Net Zero China, and it was really a big deal in China because it teaches Chinese children about net zero and the importance of the environment and so on. He gets invited to a lot of schools, and he does Zoom sessions with tons of kids online.
47:42
He said, “The problem is that many of them don’t have a sense of curiosity anymore. They don’t know what they want to do. That isn’t very strong, and the system takes it out of them to some extent.” He thinks that an AI, maybe he used the word companion, could help, because every child is different, has a different direction, different ideas, and needs to develop in specific ways. He thinks AI can play quite a role in that.
ART 48:08
It’s possible. I don’t think AI has that agency without human support. The end of 1984 … 1984 is not that predictive, but one of the things that is highly predictive comes at the very end, when the two main characters have been taken past the point of no return. They used to be lovers, and now they look at each other and they say, “I betrayed you.” “I betrayed you.” The idea was that they no longer have what Chen calls curiosity. They get to a point of no return and it’s gone.
48:53
The late, amazing Ed Schein, at the beginning of his career in the ’50s, worked with prisoners of war from the Korean War. They were indoctrinated. This was where the word indoctrinated came from. They were made to believe that the Chinese communist system or the Korean communist system was better than the American system, and that democracy was full of dupes. They believed it wholeheartedly, regardless of where they’d been brought up, as long as they were in the camps. As soon as they got back home and saw things differently, there was a very rapid return to normality in their attitudes and points of view. The environment and the context matter a lot when it comes to people’s thinking and to their curiosity.
49:50
So the big question I would ask Chen is, does he think that this is a permanent thing, or does he think that it would change? Is he saying, subtly, that the introduction of highly interactive AI systems would be opening a window in a room where this has not been allowed before? Does he think multiple perceptions, or more open perception, would get in and spark the curiosity? If he does think that, what would it take for it to be discovered, and would it remain?
JANE
Good question.
ART
Would the AI outwit the efforts to control it in this regard, and would the AI then substitute its own form of control if it were that clever? Now we’re in the realm of what capabilities an algorithm ultimately has, and that is a question the experts in the field can’t answer, but I think that’s the question. I think that’s a scenario question. Maybe they can, maybe they can’t, but it would be good to be prepared for all outcomes.
Art’s next book: conversations and evolving understanding
JANE
My final question to you is, what is your next book going to be about?
ART
Let me just plug this book first. It’s still on sale. It’s just a few months in print and it’s getting a lot of response. We do advisory work along with it; kleinerpowell.com or the aidilemma.com are the ways to learn about it. The integrity of a book right now means asking, what is your next line of inquiry? I’m interested, and it’s too soon to say what it would be, but I think there’s something interesting at the intersection of machine learning, organizational learning, and human learning, and what each of these has to teach the others.
52:15
Obviously, many books will be written about gen AI. I think our seven principles are the seven dilemmas. We’re working now on the solutions. As with this book, we didn’t invent the dilemmas; we drew them from what people are working on and trying to do. I think the practice of the solutions is now emerging. We’re working on some things ourselves, like how do you calculate, can you quantify, the intangibles of risk and opportunity costs for AI systems, where the feedback loop is faster. So we’re doing a lot of work in that regard. I think, ultimately, a book will be more like a hub than a book. A really good book will be a book and a series of conversations, probably a course, and an evolving understanding.
JANE 53:23
Well, that’s what I decided when I wanted to write a second book. I talked with my son about it, and he convinced me a book was not the way to go, that I should go for a podcast and videos, and that’s what I’ve done. The website is Imaginize.World, and .world is the domain name. I didn’t realize you could use .world as a domain name. So Imaginize World, for me, is my current project. I’ve written a number of articles, and I did a research program with 200 people around the world, but the main part of it is these interviews I’m doing with people like you and putting together what I like to call a living book.
ART 54:01
The issue is how to make it into a livelihood so that it supports you. In order to do that, you have to justify it to someone. If you justify it to a large paying audience, that involves attracting them. If you justify it to a sponsor, that involves having the trust and the capability. Some people have really made that work. It is less expensive than many think, but it has to support-
JANE
It’s a lot of work.
ART
Well, it has to support three or four individuals part- or full-time, and it turns out to be a prodigious task, as it always has been.
JANE
As it always has been. Well, Art, it’s been great talking with you. I hope you’ve enjoyed it as much as I have.
ART
Perfect. Thanks, Jane. It’s great to be in touch with you in this way.
JANE
In a real conversation. Exactly.
We talk with forward thinkers, sci-fi visionaries, and pioneering organizations about people and society, AI and humans, the earth and survival. Read more at Imaginize.World.
Subscribe on your favorite podcast app.