The AI Dilemma
Dilemmas are hard choices
I think the title that you chose, The AI Dilemma, is a very powerful title, and I looked up the word dilemma to really get a hold on what it means. I know you have to make a choice, but the definition actually includes the notion of a difficult choice, a troublesome thing. It's not just a choice. It's a situation where it's not at all clear what's right and what's wrong, what's good and what's bad. So you've got this idea of a dilemma in your title, and then you go on to reassure people: the exact words you use are seven principles for responsible technology. So it's like, we've got this problem, and these principles are going to help you. Was that your intention with the title?
That was exactly our intention, and we did not pull the book out of the air. It was based on Juliette's work; she has a large network of people in the technology industry, both big tech and entrepreneurial tech. I have a longstanding background writing and thinking about automated systems, computers, business, and management. She was at Columbia researching what became her dissertation on whether companies could be trusted to regulate themselves around artificial intelligence, and the short answer was no.
Yeah.
The track record is not great. The long answer is, however, that there are ways in which we have to. In the right hands, AI does amazing, miraculous, powerful things. It is not going to be postponed, let alone eliminated. It's here to stay, and for good reason. We, humanity, are already dependent on things that AI delivers. And in the wrong hands, it can be devastating to individuals and potentially to large numbers of people.
Four logics of power – the engineer logic
So when a producer or a user or a company or an individual makes a decision about AI, how are they thinking about it? Where does that thinking come from? There are four ways of looking at it, four perspectives, and each one has a logical basis. The perspective of the engineer is that the work has its own quality: I am loyal to the quality of the work, and I am loyal to the people who pay me, because, after all, that is the nature of this kind of work. It's craft work, often done for hire.
But it does not include consideration of outcomes. There's no Hippocratic oath equivalent among software engineers. They're not even trained and held responsible as engineers in the same way that a mechanical engineer is responsible for a bridge not collapsing; they don't get certified in the same way. The premise is that they're going to create great code and put it out there. You can't have it perfectly bug-free right from the beginning, so you have to test it in the real world with real use. Then you find out what it's good for, and sometimes you find out it's good for something you didn't expect, and that's part of the joy of software. That ethic is very much part of the engineering code. It's great, until the unintended consequences start to affect people.
The business logic
The next logic is the business logic. We are here to survive as a business, grow as a business, make a profit and return investment to our shareholders, generate cash flow, basically keep people solvent and employed and getting richer. And that is great as long as it's not at the expense of others.
The government logic
The third logic is the government logic, or the regulatory logic, where we have entities that are responsible for what happens within a geographic area. They have to attend to the needs of the people there, manage defense, and regulate things that might turn out to be problematic.
We are our personal data
When it comes to data, we are our personal data, as Juliette says. We see one another first through what we've said, what others have said about us, and what we've done. We can typically be tracked; our history is available, so people know us through the data we've generated. And now, with misinformation rampant and photographs and videos so easy to fake, the reliability and veracity of that data is more and more uncertain.
No more secrets?
Let me tell you a story. I teach scenarios, and we do a lot of scenario work at KPI. I have been teaching a class at New York University on the future of media and digital media for a long time. About 15 years ago, I started asking my class this question: imagine it's the future. You have a good marriage and you have children, and it would really be terrible if your marriage broke apart. Yet one day, you are seen leaving the wrong place at the wrong time with the wrong person. A camera on an automobile driving by takes your picture, and it gets automatically posted. There's a bot on the web that knows your image and associates your face with it, and one of your cousins has another algorithm on something like Facebook that automatically posts any picture involving you, and your marriage falls apart. And I asked, "Is that a better world or a worse world than the one we live in?" When I first asked that question, which was, I think, in the early 2000s, that was still the future. And shockingly, just about everyone in the class said it would be a better future. There would be no…
Sorry, they said it would be a better future?
Yeah, there would be no secrets. Everybody would know everything. We wouldn't have to worry about who was doing what. It would be terrific. And I kept asking the question every year. Then I took a couple of years' break, came back, and asked the question again in 2013, 2014, and by then it was no longer the future.
Global regulation
The EU AI Act has a really intriguing way of answering that question. It divides AI into four categories. Minimal risk doesn't need to be regulated. Limited risk means, for example, that if it's a bot and it's doing therapy with you, it has to tell you it's a bot. It can't pretend to be a human being; if it's a friend online and it's a bot, it has to identify itself as AI. Otherwise, it's forbidden. The third category is high risk: the self-driving cars and all of the predictive analytics that are used for all sorts of useful purposes and a lot of research. There, the EU says the system has to be audited, in the same way that a publicly held company is financially audited. And the fourth category, unacceptable risk, is prohibited outright. So now we have to have external, technically trained auditors, and there is an emerging cottage industry of potential auditors, whether for laws like the AI Act or for legal liability, who are brought into companies, often against their will, to do some sort of due diligence around AI. It's going to be a major profession, I think.
AI and education
The education system is so complex, and I co-authored a book called Schools That Learn that covers a lot of these issues in depth. One of my co-authors, Peter Senge, has spent much…
Oh Peter, yeah, yeah.
Yeah, he has spent much of his effort in the last 10 years working with educators, and his voice, among other voices, basically says: a system of education that is intended to raise the quality of life for everyone in it, based on their unique needs… These aren't his words, they're mine. A system of education that matches what people want to learn and need to learn, that does that individually but within a community, and with a sense of high respect for people just because they're people, which is the real meaning of equity. That system does not currently exist. What exists is hierarchically constituted by the politics of boards of education in the United States, by the administration, by the education of educators, and all of those establishment issues. So there are a lot of administrative and bureaucratic things, in the best and the worst senses of bureaucratic, all put in place, and the school system reflects all of that.
AI will help us build trust
One of the really interesting questions about AI is, does it get used to put people back into straight rows, or does it get used to give people the tools they need? Who’s in control? And if we’re going to put kids in control of their own data, are we going to give them the skills they need to use that control effectively? And are we going to give them access to the tools, and are we going to trust that they’ll use the tools effectively? And if we trust them to rise to the occasion, will that trust turn into a self-fulfilling prophecy? Jane, nobody has the answers to these questions. And we’re going to learn by trying, and we’ll probably try different things in different places, we being all the people who try things, and AI is going to make it easier to do that.
For more from this interview, subscribe to Imaginize World on YouTube or wherever you listen to your podcasts.