Below is the full transcript.
Setting the stage
Is it too early for us to think about how AI, and machine learning in particular, will change how we think about ourselves and our world? And the right answer to that question is: yes, it is too early. We need to wait to see what will happen. But I don’t feel like waiting.
JANE
I’m Jane McConnell and welcome to Imaginize World. Today, we need to focus on the future for new generations. What kind of world do we want them to live in and how can we help them build it? Greetings. Today I’m with David Weinberger, a writer whose books and work about technology have influenced many. We talk about the evolution of the web, AI and machine learning, fairness, unanticipation, and other dimensions where technology and humans come together. Good morning, David.
DAVID
Hello Jane.
JANE
It’s very nice to see you in person after having heard you talk online so much and having read a lot of your work. I’d be interested in your describing yourself briefly, because everywhere I look, David, they have a different word for describing you. You’re a philosopher, you’re a technologist, you’re a researcher, you are a writer, all these different perspectives. What would you want people to say? I know David, he’s a…
DAVID
It’s easier to think of things I don’t want them to say about me.
JANE
Okay, that’s a good answer too.
DAVID
But I’m not going to answer. I don’t want to go down that path. Well, I will a little bit. So I have a PhD in philosophy from 1979, which is a long time ago. I taught for five years after that. And then in 1986 I went to work at a tech company and I haven’t done academic philosophy since. And I never called myself a philosopher, because to me that’s like a literature professor calling himself a poet, which they might be, but being a literature professor does not make you a poet and teaching philosophy does not make you a philosopher. So I have never, I don’t think I’ve ever called myself a philosopher. I think I’ve almost always pushed back against it. It got easier when I was 50 because I felt entitled to call myself a writer, which is what in fact I had always been.
I guess there’s a lot of teacher in me as well. But I finally had a book accepted, and it did well. It was co-authored. And at that point I felt, okay, that’s the sort of thing that lets you say you’re a writer. So usually I say I’m a writer. And despite what I just said about philosophy, I know that there are philosophical strains in what I write. My philosophical background matters to me. I mean, it’s part of me. But over the past, I’d say, five years or so, I’ve started coming to terms with the fact that it’s actually a significant part of me.
And I’ve been embracing it more in what I’ve been writing, which is why I am publishing much less. That, and I’m increasingly shy. Yeah, I’m a pretty deep introvert, which doesn’t always manifest as shyness of course, but I’m also really shy, and I’m anxious, in the bad way, about posting what I’ve been writing, because I doubt its value and I don’t look forward to being pilloried. It’s very simple: I have a pretty imperial imposter syndrome, and I don’t think I’m unique in this, but I think it’s well-earned. So, short answer: I’m a writer. I write about technology.
JANE
Okay, that’s good. I say that it’s good because Imaginize World is about how people imagine the future in order to build it. And I’m orienting the direction more and more towards thinking about youth, about the future generation. How can they be inspired? What should they be thinking about? What should they be doing? What can they do to create the kind of future that they want to have?
The evolution of the web
DAVID
One of the most crucial questions. For a long time I’ve been a member of a network, it started out as a little conference, but it’s a network, launched in the year 2000, where the founding question, which people stick to pretty well even though lots of things come up, is: what is the internet that we want to leave our children? So this was in 2000. AI was not yet the thing, and the internet seemed to be tremendously important, as I think it is. Its founding values were under threat even in 2000. That’s a subset of your question, but it’s the same sort of conversation. I think they’re really, really valuable conversations. I think they’re essential conversations.
JANE
I had a conversation, one of my guests was Sugata Mitra, and he created a hole in the wall in his office in India and put a computer in it and little kids who couldn’t speak English-
DAVID
I think about that experiment a lot. It comes up frequently for me.
JANE
One of the things he said when I interviewed him was that we are preparing our children for our pasts, not for their futures. Preparing children based on our pasts and not thinking about their futures. He’s got a lot of things to say, and it’s one of the most popular ones on my podcast series for the moment. So he struck a nerve, I think for a lot of people.
DAVID
So when I first saw the video, I think maybe I saw him at a conference. And the first time I encountered him, which I think may be the only time that I’ve seen him in person, it sent… I cut you off as you were describing the experiment.
JANE
It’s okay.
DAVID
As I recall it, and you’ll correct me, he set up a screen in an Indian town or city. Children came around and they figured out how to use it and how to make their way through the internet without any instruction.
JANE
And they didn’t speak English in the first experiment.
DAVID
Which is really, really surprising, because I have tried to teach, I won’t name them, but elders, when I was younger and was not one of them. So 25 years ago, I tried to teach them how to use a Mac just for personal use. And mainly they wanted it for the internet. And I could not get through all of the cognitive cruft, all that they brought to it. Not being able to differentiate a browser window from a tab of a browser, there weren’t tabs at the time I think, but from a pop-up message. They were all pictures to this person, and there was no context or understanding that this is a picture that you can interact with, this is an icon on the desktop. It’s a picture too, but it acts completely differently than most of the other pictures you see, including ones that look like it on a web page.
They eventually figured out how to do the things they wanted to do, which was mainly email. But it was really, really hard. So this was an older person, he was in his 70s. And so it surprised me that people who were more or less blank slates when it comes to tech, kids who hadn’t seen a computer before, got it without instruction. And maybe I should not have instructed him, maybe I should’ve just thrown him into… So I’m still not sure what to make of that. And it’s a wonderful and really important point that we are imagining the future, our future, for our children, and that’s extremely limiting.
Just as trying to make the shift from no computers to computers is really, really cognitively difficult for people who are coming into this fully formed, I mean they’re adults. In the same way, it’s very limiting to imagine the future of the youth, but we don’t have a choice. If we’re going to imagine the future, we can only do it from who we are. We can keep ourselves open to learning, of course. Old people, even just adults, are inevitably products of our past. We can’t ever shake it entirely. Which raises the question, I think, a bit: who should be imagining the future for our children? We sort of have to make decisions.
JANE
Or maybe our children should be doing it.
DAVID
Absolutely. The question is, they’re going to become adults under our tutelage, and when they’re adults their imagining is going to be better grounded and have more effect; they can do something about it. Five-year-olds can’t do anything about it, but 25-year-olds can. They can invent things, they can legislate things. So it’s always going to be really, really messy. That’s why breaks in how we think about the future usually aren’t as abrupt as we might think they are. In paradigm shifts, a lot of the old paradigm comes along, because it has to. We can’t start over from nothing, pretend we don’t know anything, that we don’t have any presuppositions or theories or desires. Can’t, just can’t do it. “You can’t jump over your own shadow,” as I think Martin Heidegger once said. But that’s where we are.
David’s books over time
JANE
And I think we need to learn from the past. We can’t reject it or ignore it. I think it’s important to know it, but be aware that it is the past and that things are constantly changing. That’s one of the underlying messages throughout your work. If you would give me the privilege of simply summarizing my thoughts in just one sentence for each book. In 2000, you co-wrote The Cluetrain Manifesto, and your subtitle was The End of Business as Usual: that the web is fundamentally a social space. I think people would agree to that today. Then you go on in 2002, and I love this title, Small Pieces Loosely Joined, which is very interesting, but you go on and say, A Unified Theory of the Web. And you talk about how we have all these small pieces, human beings are connected, and how the central point that used to link them all together has been removed now.
And so now we’re in a different kind of web. Then you go on to Everything is Miscellaneous: The Power of the New Digital Disorder, and that’s a rather, how would I say, provocative title, in that you’re saying there’s a disorder, digitally speaking, and it’s powerful. A thing you say in there is that deciding what we believe is up to us now, because there’s no one way of organizing everything that’s out there on the web. Then you move into Too Big to Know, that’s in 2012. And this one is Rethinking Knowledge Now That the Facts Aren’t the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room. That’s an extraordinary title. How did you come up with that, David?
DAVID
The titles were really hard for me, because most of the process of writing the book is figuring out what it’s about. So for Everything is Miscellaneous, I kept track of its titles, and I can’t find the list anymore, but there were at least 20 of them over the course of the year or two writing it. And at one point each of them reflected what I thought the book was about. So the book actually is about no longer having to rely upon a single central taxonomy or classification or set of categories that we’ve all agreed is real, and saying none of them are exactly real, though many of them are useful. And with the web we are able to create, for example, we’re able to tag things. So rather than saying, “This thing goes on this shelf, it’s a book about the history of cooking in the Civil War.” Does it go on the history shelf, the cooking shelf? Where does it go?
We don’t have to pick a single place for a book, because they’re digital and we can just tag them with whatever it means to us. Maybe “great-grandpa” is what we’re going to tag it, because it was great-great-grandpa’s, whatever. And we can give it as many tags as we want. And that dissolves the… It’s so much more useful, almost always, than a single arbitrary, well, not arbitrary, but a single classification scheme. So I thought at the beginning that book was about the power of metadata. The Power of Metadata is not a great title for a book that’s trying to get readers.
JANE
No, it’s not.
The role of titles and subtitles
DAVID
And so I went through 20 different iterations, not just of the title, but of what it was. It was a defense of messiness for a pretty long time, which is much better than anything with the word metadata in it. Too Big to Know, that one I knew early on was about knowledge, and the fact that there’s so much now, we know so much more than we can keep track of, and our systems of knowledge have been set up largely to reduce, because before the digital world and the internet there was only so much that we could know, that we were able to keep track of.
But now the amount of storage and accessibility is pretty much infinite. So I knew what it was about. I think the title came pretty early. The subtitles are intended to be provocative. They let the reader know a little bit more about what the book is about, but they also try to give the reader a sense of what reading the book will be like. Is it going to be a dry academic book about knowledge and authority? Well, in a sense it is, but it’s not an academic book. And I do like the Smartest Person in the Room Is the Room as a phrase. So I was happy to hit on that.
JANE
It really makes me, and I’m sure a lot of people who come across that title or who read the book, think about the fact that it’s no longer a question of individual expertise; it’s a question of working together with different people. Experts are not experts, because you raise the question of what an expert is. That is something you raise regularly in your writing, and just one quote that you have from the book is, “The internet enables groups to develop ideas further than any individual could.” That’s a strong statement. You’re suggesting that even some of the top thinkers or top scientists benefit from being part of a network, directly or indirectly, and that the knowledge benefits from that.
Does expertise relate to machine learning today?
DAVID
Yes, and I think that’s, I’m going to say, always the case, but you or somebody is going to think of a case of a solitary genius who just came up with it all himself, and it’s hard to deny that that happens. Although there is also a historical background and context which enabled them; these are the giants the person sits upon. And it’s so much more the case now that anybody can participate. Lots of people who don’t know much, or don’t know enough, engage in conversations where you’d have to unpack a lot of what they say to help them see why, at least as you think, they are actually wrong, or whatever.
So there is a lot of, I’ll call it noise, and I don’t mean it in a totally negative sense, because the person who’s engaged in this has the opportunity perhaps to learn a great deal, and likewise for the person who knows more. It’s primarily on the internet where we’re seeing these ideas emerge and then get debated and refined, and sometimes bashed wrongly or upheld wrongly. But we all know this, and it’s gotten, in a sense, less efficient, and I’m not sure why. It has allowed a lot of very bad ideas, in my opinion, to become taken as codified by large segments of the population. This is just maybe the most obvious and talked-about problem with the internet.
JANE
Yeah. Let’s move to 2019 and Everyday Chaos. This is a book I think could have frightened a number of people. Your subtitle is Technology, Complexity and How We’re Thriving in a New World of Possibility. It’s a very, very upbeat title, and you move into chaos and you talk a lot about machine learning, deep learning, and I think that’s what maybe puts people off. It’s related to AI, ChatGPT and all that, which people are taking for granted now. But I don’t think most people really stop and think about machine learning and what it means. And you talk about that quite a lot.
DAVID
Yeah, that’s what I’ve been writing about for seven or eight years, because I find it fascinating. Most, if not almost all, of what I’ve written in the past 40 years has been about how tech affects how we think about things, the sort of mental models that we have. Not a phrase I’m crazy about, but it is actually, I think, probably the right phrase. The subtitle, I’m sorry, I’m going to go back to subtitles. The subtitle of Cluetrain was The End of Business as Usual. And in lots of really important ways business as usual did not end; in lots of important ways it did. And I think we’ve gotten so used to them, we don’t always notice them. But that subtitle would need a lot of work now. If a book came out with that as a title, we could pretty clearly look back across 25 years and say, nope, that’s not what happened at all.
Big companies got bigger, et cetera. But lots of things did change. My point is, that’s the subtitle I maybe regret the most, although it was very effective for the book and we believed it at the time. But that’s because it was a prediction about what would happen, and I have no power to predict what’s going to happen. I’ve never thought that I did; I don’t. I don’t know who does, maybe nobody, but it’s certainly not me. I’m way more interested in how living on the internet, as we were starting to do in 1999, 2000, changes how we think about how things go together mentally. And that’s always been what I’m interested in, which is to say I’m interested in the philosophical implications of technology. But since I’m not a philosopher, I’m not comfortable saying that. And in terms of writing a book that’s intended for non-specialists, it’s not a good word to include in your subtitle.
JANE
You say you talk about how technology affects how things go together.
DAVID
In our minds, yes.
JANE
What we see is in some cases cause and effect, for example.
DAVID
Yeah. That’s-
JANE
I mean, I don’t remember if you talk about that directly.
Machine learning, causality and predictions
DAVID
I’ve just been writing about it, so I can’t remember. So that’s a very big one. And machine learning is really interesting in that regard, because it doesn’t know a thing about cause and effect. It’s not, air quotes here, “a concept that it has.” If it had concepts, that wouldn’t be one of them. It’s not part of the model that machine learning builds, either in the large language models that are behind the chat AI stuff that has suddenly just exploded the world, or, if I say more traditional, I mean like 2010 machine learning, where identifying objects in images was sort of the classical case of what machine learning could do.
It doesn’t have concepts to begin with. But when you get to things that are predicting what we would see as the effects of causes, which is often what we use machine learning for, it doesn’t know. It knows about correlations; it doesn’t know about causality. It can observe that between smoking and lung cancer there’s a correlation, a statistical correlation, but it doesn’t know which came first. It doesn’t know if the cancer causes smoking or smoking causes cancer. It’s just not part of it. And there are people who are very concerned about this and have proposals for how to integrate causal models into machine learning models. It’s apparently really, really hard to do. And that’s the source of some errors, but it also means it just doesn’t think like us; it doesn’t think at all. I’m going to stop backing off of using human cognitive terms to talk about AI, because it’s not cognitive, but there’s no other way to talk about it. We don’t have a vocabulary.
JANE
You say that the universe, we have long believed that the universe is knowable, and that because we think it’s knowable, we used to think or many people still think it’s knowable, therefore it’s pliable to our will, you said. And I love this quote where you say, “Evolution has given us minds tuned for survival and only incidentally for truth.” Do you remember writing that?
DAVID
No, of course not. I do remember believing it. I mean, this is a very serious conversation right now, in fact, in the form of the claim being made by some scientists that the universe is tuned for life, which is a really big claim. And this can be used to support the argument that we are all living in a simulation, because if we are in a simulation, it absolutely is tuned for life. But the idea that the entire universe, and usually the people who are saying this, I think, are saying not on purpose, happens to be the one in a gazillion universes in which life, and conscious life, would emerge, is just so wildly improbable.
And so right now this is one of the controversial notions that is floating around. I don’t believe it was tuned for anything. I have no standing. I’m not an astrophysicist or a quantum mechanics person or a multiverse theorist. I’m nothing. But I see no reason to believe, or see how it helps us, to say that the universe is tuned for life or for intelligence. And there’s long been a Darwinian view, long meaning a few decades, that you can explain everything we need to explain about the evolution of everything, including us. In fact, it works out better for us to be tuned for survival than for truth. I mean, that’s sort of essential to Darwinism.
Fairness in AI models
JANE
One of the things about machine learning that you talk about is the idea that we as citizens, people of the earth, have the right to know, and should know, how it’s working, so we can therefore, I guess, validate it or agree with it. And you talk about the idea that what’s important is knowing the data that goes in, and that understanding how it works is something beyond us and is not necessary to know. We don’t demand that of other systems. So how can we accept, and I would say celebrate, machine learning while ensuring that it still, how to say it, you talk about our human truths, our values, that it respects fairness, for example? That’s something you talk about a lot. How can we take control of that? Or how can we influence that? I don’t even know how to ask you the question.
DAVID
Yeah, those are good ways of asking the question. I mean, we know that AI models are completely capable of being wildly unfair, and sometimes obviously, which is at least better than not obviously, but sometimes not. We don’t even know it. That’s really, really dangerous. And I think we keep discovering layers or levels at which it can be unfair in one way or another. One of the good things this has done, I hate to point out the positive side of AI unfairness, but I will, because I think it changes how we think about fairness.
Because if you are involved in trying to fix an unfair AI model, it’s quite likely that you are going to have to face questions that we humans have a great deal of difficulty with. And you will discover, as researchers have, as well as practitioners, that, oh geez, there are maybe a dozen different types of fairness, which is very important to know. But then you have to decide, well, which type is fair? Which type do we want in this case? And we don’t have good ways of deciding that. So can I give you an example? You’re using AI to evaluate mortgage applications, because you’re a mortgage lender. And you want to be fair, and so you do your first run of it and it looks useful. It says, “Okay, these are the top-tier people you should lend to.” But upon investigation, or even a casual glance, someone on the management team says, “There are very few women in this top tier.”
And so you go to the AI person, who confirms that, and confirms that it’s actually not even proportional to the number of women, because traditionally, at least in the US, fewer women apply for mortgages than men. But it’s way out of skew. I mean, it’s horrible. And so you go back, and the first thing you do is examine the data, because that’s what machine learning is learning from. And you verify as well as you possibly can that no, the data is not biased. It goes through, passes all the tests. It’s representative as far as you can tell, a representative sample of the data it’s been trained on, and it had no signifiers in the data, because you removed them, about what somebody’s gender is and so forth. You removed the proxies as well, that is, the data items, the columns in the spreadsheet, so to speak, that are correlated with gender, either obviously or not obviously.
You do all that, and you go back and you say, “The data’s good as far as we can tell. The data is fair.” And then you propose some ways of fixing it. For example, you say, now, we could change what the threshold is for getting approved for a loan, and we could just set it sort of arbitrarily: we’ll lower it for women and we’ll raise it for men. You tell me what percentage you want, and I can hit it exactly. If you want 50/50, I can do that for you. If you want 60/40 for men, I can do that. If you want 60/40 for women, because of historical injustice that you want to repair, I can do that. And somebody for sure in the management meeting is going to say, “Well, no. If the data’s good and it doesn’t know about gender, then it’s gender-blind, and gender-blind means fair.”
Are different types of fairness contradictory?
Well, it means one type of fairness, right? But there are at least six others that you could pose here. I’ll just give you one example. Look at the men and women who were rejected, and see if more women were unfairly rejected, meaning they should have been approved by any normal measure, but they weren’t. And if it’s a higher percentage of women than of men, then that’s another type of unfairness, exactly the same as not promoting women because they’re invisible to you, or whatever. So there are at least six, probably a dozen, up to 20, maybe more types of fairness. And fairness in the history of philosophy has been treated like folk medicine by doctors. That is: how many cookies do you give a kid, right? Is it the same number, a different number? And so philosophy had, until the 1970s, paid zero attention to fairness, and now it is the core ethical question posed to AI. It shouldn’t be the only one, but it is the one that they have conferences on.
It’s really, really good that our encounter with AI is forcing us to face the fact that fairness is really complicated. It’s also forcing us to face the fact of having to specify exactly what would be fair. Let’s say we go with the threshold thing: what’s the right threshold? How do you decide that? And we’re very uncomfortable giving the exact number, by the way. That’s a really important discussion to have and realization to make. So yes, it can be horribly unfair, and I don’t mean to minimize that at all, but for whatever reason I am intellectually more interested in the effect on our ideas about fairness, in part because there’s so much great work being done on the practical effects of this unfairness that I don’t feel like I have anything to contribute to that.
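[Editor’s aside: the two criteria in play here, equal approval rates versus equal error rates among qualified applicants, can be sketched with a few lines of code. All data below is invented; the point is only that one common definition of fairness (demographic parity) can be fully satisfied while another (equal opportunity, roughly the “unfairly rejected” test described above) is violated at the same time.]

```python
# Invented applicant records: (gender, qualified, approved)
applicants = [
    ("F", True, True), ("F", True, True), ("F", True, False), ("F", False, False),
    ("M", True, True), ("M", True, True), ("M", False, False), ("M", False, False),
]

def approval_rate(gender):
    """Demographic parity compares this rate across groups."""
    rows = [a for a in applicants if a[0] == gender]
    return sum(a[2] for a in rows) / len(rows)

def false_negative_rate(gender):
    """Equal opportunity compares this: qualified applicants who were rejected."""
    qualified = [a for a in applicants if a[0] == gender and a[1]]
    return sum(not a[2] for a in qualified) / len(qualified)

# Same approval rate for both groups (demographic parity holds)...
print(approval_rate("F"), approval_rate("M"))              # 0.5 0.5
# ...yet qualified women are rejected more often than qualified men.
print(false_negative_rate("F"), false_negative_rate("M"))  # ~0.33 vs 0.0
```

Deciding which of these numbers must match is exactly the management-meeting argument described above; the code can compute either, but it cannot choose between them.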
JANE
Well, that’s very interesting, what you’ve said. The situation I was thinking of is very simple and quick to describe, and I’m curious to know how you’ll interpret it. It’s something I read a long time ago. It had to do with self-driving cars: if the self-driving car is in a situation where it has to make a choice, am I going to hit this old man, or this young girl, or these children, or the pregnant lady? Which do I hit, or which one do I avoid, in a context where you can’t stop the car fast enough and someone’s going to get hit? And so you have to make that decision. Maybe it’s a question of, I don’t know if it’s fairness, that’s not right, it’s a question of how do you make decisions like that?
DAVID
What moral framework are you going to use? If it’s utilitarian, then you kill the old people first, because they have less to contribute. So there were some grad students at MIT who, 10 years ago now, boy, time flies, set up a site called the Moral Machine. It asks you to make exactly those decisions, 10 instances, different sets of people randomly described as a healthy jogger, or somebody with cancer, or an old person, or a young person, or a nun, et cetera. And it is valueless, from my point of view, and I think they’d agree, for settling any moral arguments, but it does have value. They broke the results down by culture, and they noticed things like, statistically, in China they value the elderly more than is typical in the West.
And so you generally would steer away from, I’m sorry, it’s reinforcing the stereotype, but it seems from that data to be a genuine difference; I’m just reporting on the data. But there are no moral implications to that, because we don’t know which culture gets it right, and that may not even be a sensible question. The good news about cars is that, at least in the West, I don’t think we’re ever going to feel entitled to make that decision, if only for political reasons. You can just imagine the uproar if the federal government says, “And they’re going to knock off the old people first,” or, “It’s going to spare nuns and rabbis if we can identify them.”
It’s just politically impossible. But there was a real case of this, seven years ago? My time scale is way off, it’s all one moment for me at this point. One of the major car manufacturers, German I think, European anyway, one of the project managers at a conference let slip that their cars would always protect the driver and the passengers first. And that got repudiated by more senior bosses, because it sounds terrible. But nobody’s going to buy a car that does otherwise. Most people are going to buy the car that says that’s what it’s going to do. You’re not going to die to save a nun, or whoever, just make up your example. It will always make you person number one. It’s hard to get around marketing that way, I’m afraid, at least in the US. Nobody wants to face the question, because it’s not a possible one. I don’t think we’re ever going to be able to face that question.
JANE
It’d be interesting to ask a bunch of children what they think. Now you said you had a third one, David?
Interpreting AI images and smiles
DAVID
Yes. I just came across this; it was a year ago. Somebody on Reddit asked Midjourney, which is one of the image generators, a prompt that came down to this: “Compose a selfie of Spanish conquistadors in the 17th century. You don’t see the camera, and it should look like an iPhone photo, et cetera.” And it did. It is really sort of great. You’ve got a bunch of conquistadors in their uniforms, and they have, well, big, big smiles. This person asked for different time periods and different cultures. And every one: great picture, great selfie, just astounding. So good, and everybody has a big, big smile looking up at the camera, including the Aztec warriors who were being ferociously and horrendously killed, dying to try to protect their culture during the genocidal attack on the Aztecs. But they got a big old, as an article about it said, a big old American smile. The Redditors were saying, “Oh, this is so wonderful, it shows that we’re all the same.”
And about 25 comments in, somebody says, “No. It shows that AI thinks that we’re all Americans.” Because it is a big American smile. So there’s some really good writing about this. This is a very subtle form of bias. One of the images is of Native American chiefs or leaders, and they also have the big smile. I’m trying to remember the name of the… Oh, Janka, J-A-N-K-A, they go by their first name, wrote a really great article about this, from which I’m now drawing. Smiles are very culturally and historically determined. Even today, right now in Russia, if you catch the eye of a stranger, in America we tend to smile, maybe not on a New York City subway, but in general we will smile, and in Russia that would be weird. It would be totally suspicious, sus, to do that.
It’s just not the right way to greet somebody, for whatever reason. So this is so full of American stereotypes, which is almost inevitable given what it’s been trained on. So I went to ChatGPT and asked it to do the same thing, but I also asked it, “Can you tell me about the history, the differences in the meaning of smiles across histories and cultures?” And it gave me a great little mini Wikipedia-style article, the way it does. Six examples, one of which was in fact the Aztecs, and I think it sounded right to me, maybe it’s hallucinating wildly, but nevertheless it did point out ways in which smiles mean different things. So it does “know” that smiles are historically, culturally determined, but when you ask it for a selfie, it doesn’t have a body of knowledge. It doesn’t know that. Nowhere in it is that piece of knowledge codified and connected with other things.
This is a language model. It just knows how we’ve used words, the likelihood of one word following another based upon the gazillions of pages it’s analyzed. So it’s utterly, utterly foreign. It acts like it has knowledge, but it doesn’t have any knowledge, none at all. This does raise ethical questions. So what would you like it to do? Let’s say you’re in charge of ChatGPT and somebody points this embarrassing thing out to you. When somebody asks for a selfie, that’s what they say, “I want a selfie of…” Do you apply your cultural knowledge and disappoint them because not everybody’s smiling? Do you make it accurate? Because if you just ask for a selfie, it’s going to draw on what it knows about selfies, which is pixels that make big smiles. Or do you want it to give the person what they probably want, which is everybody happily smiling in a selfie?
You don’t frown in it. It’s not a selfie if you’re not smiling. I don’t know what the answer to that question is. It may be an unanswerable question, but it has ethical implications, because you end up with 1860s Native American chiefs gathering and smiling insanely wide smiles. And we have historical evidence, as Janka points out, that no, they didn’t smile for photos. Let me say really quickly why I like this example. I sort of know how to deal, kind of, with the others. I sort of know what the parameters are, and you have to make a decision about it, and you can make a decision about the mortgage loan, et cetera. I don’t know what the answer to this is. I’m also interested in the fact that I take it as a pretty clear example that these things do not know anything. It’s not knowledge in any way that we think of it.
How will AI influence the future of education?
JANE
Right. Speaking of knowledge, David, that brings me to a topic I want to get into with you, and that is education. I’ve done a lot of research and I’ve done some writing that I haven’t published yet on the fact that I think the whole education system is in over its head right now. It’s actually doing more harm than good. And it’s not just in America. I did a survey, which I called Future 2043, where I had 15 questions with one free response for each question: you rate something on a scale and then you put in a comment if you want to. And I got 200 people around the world, in maybe 20 different countries. So it’s by no means a statistically valid sample. I mean, 200 people around the world is nothing. But it’s a wide variety of people in many different countries. And there were clearly over a thousand comments, maybe 1,005, that people put in to my questions.
I published a whole report online about the results. One thing they were really clear about was my question about education in general: “In the next 20 years, are we going to have the same models of education that we have today, or will they be radically different?” And the comments I got from many different countries were that education has to change. The models we have today are not working. If it doesn’t change, it’s going to be, someone talked about it being, like a dinosaur. And I had people from India, from Indonesia, from so many countries who agreed that education is not happening the way it should today. What’s your view on that?
DAVID
It’s going to take a generation to figure out what to do. I think you very likely agree this is not a question where there’s an answer and we can start implementing it and getting it right. We are, I think, at sea at the moment, lost in trying to figure out what to do about AI, particularly chat AI, because it’s so accessible. It’s designed to be accessible, it’s designed to make us think it’s a human. A lot of money was spent on that as a design goal. We don’t know… It’s challenging everything. So I don’t know what to do. It’s challenging the value of writing reports as a metric, or even as something you should know how to do. I personally think that very few people in the course of their life write reports after they leave school. In one way, it’s a weird thing to spend so much time training people on, but it’s also a way of training them how to read reports, how to read and evaluate and think about, and how to extend their own thinking, all of which are really valuable.
I’m not sure that reports, that writing in that form, is the right way of doing it, the best way of doing it. Knowledge has become much more conversational, which has great drawbacks, because we lose some of the certainty that having a validated, agreed-upon body of knowledge gives us. We know what to refer to, and we know we’re safe if we take this. Even if it turns out to be wrong, well, it wasn’t me who made a mistake, it was part of the culture, or the person who wrote it, which is why references are so important and so badly missed from the chat AI stuff. Danah Boyd is a really wonderful person, and a very important one in organizing social research, and I’m a big admirer of hers. Maybe five years ago, something like that, she wrote an article which I take to be saying that it’s not enough to teach students critical thinking skills, or what in philosophy we would call informal logic, where you find the flaws in what somebody’s saying, the logical flaws one way or another.
In fact, there’s a danger to doing that, because you are teaching students how not to believe things, which is good, but it’s actually more important at this point to teach them how to believe things, how to come to belief. So I’m not going to say she’s saying that’s more important. I’m saying that’s more important. She may agree or not. Especially since, from my point of view, being old, I grew up when there were three networks on TV, and they each had six o’clock news reports, half an hour, 22 minutes total, with three white guys. And if you watched that, you had done your civic duty, and it was reliable in some sense. It was taken as being a reliable stream of news. It was highly filtered by the privilege of the people who were putting it together, but it was generally not wildly wrong.
And so I was teaching informal logic at that point, in the early 1980s, where, yeah, I mean, here we have this stream of information. It sometimes goes wrong, and if you can spot the times when it’s going wrong, by fallacious thinking and the like, then you’ve cleaned it up, you’ve taken the pollutants out, and you can drink from the stream because it’s fundamentally reliable. And we don’t have that fundamentally reliable “we all agree that this is the news” anymore. We never actually did, but we at least had that belief. Now we don’t. What we have is so many sources of news, and lies of course, that just spotting the errors isn’t enough. It doesn’t leave you with beliefs. You take out the what, 35,000 lies that Trump told when he was president, some number like that, and it doesn’t mean that you’re now left with a credible source or that you now know what to believe. You’re just thrown out into the world where there are so many competing voices.
Coming to belief in conversations in networks
And so it’s more important than ever, from my point of view, that we learn how to come to belief. And it also seems to me, obviously, I think, but maybe obviously wrong, I don’t know, that we’re going to be doing this collaboratively. That’s what we’re doing now. And we have different opinions about what a good collaborative source is. Is it Facebook? Is it Twitter/X? Please say no. But maybe it’s that. Maybe it’s Reddit, maybe it’s how Reddit was five years ago, which is actually my point of view, and so forth. Is it having a network and a mailing list or a group chat? It’s all these different ways of collaborating, trying out ideas, and learning, some better than others. But I don’t think at this point we’re instructing children in how they are going to be coming to belief once they leave school.
They’re probably not going to read books. They’re almost certainly not going to be reading a newspaper. I don’t think we’re preparing them for the sort of new opportunity, which is conversational, a very dangerous medium, because lots of conversations are very bad. So it’s not something obvious. If it’s your first encounter with a screen, in your village in India, that does not by itself prepare you for understanding or participating in the global conversations that are trying to make sense of this wave of information. That’s what I would like our schools to be preparing our students for. Because lifelong learning is more available than ever and possibly, I have no evidence, no data, possibly happening more than ever. That’s where it’s going to happen. It’s going to happen in the conversations, the weird conversational formats, the “normal” ones that we live in on the internet, or whatever follows after the internet.
JANE
One idea that has come back to me often, through different pieces I have found on the internet during my research, is teaching children to solve problems. Not to memorize facts, but give them a problem to solve, or let them choose a problem or define a problem. It’s based on problem solving, working in small groups together. That originally came from Sugata Mitra and his S-O-L-E, Self-Organized Learning Environment, for which he got a million-dollar TED Prize to implement it, just by the way. The idea being that children, and of course this can go way beyond children’s school, that students be put in situations where they need to think about how they can find an answer or how they can construct an idea, preferably with other people. There are some models of schools for children that I’ve actually located and have included in my article where that’s actually the way they do it. Of course, right now this is a tiny, tiny, tiny minority.
DAVID
That seems amazing.
JANE
And one of them publishes a bunch of ideas on the internet about exercises. They don’t call them teachers, I think they call them leaders or facilitators or some such word. They’re trying to get away from the traditional idea of a teacher telling you what you need to know, towards people helping you move towards discovering what you want to discover. And I find that really interesting. Certainly my school was not like that at all.
DAVID
Yeah, I was writing reports.
JANE
Yeah, exactly.
DAVID
Some of which I still remember. Report on Egypt in the sixth grade or something. I covered the entire nation in three double-spaced handwritten…
Jane’s limited yet enabling school system from 50 years ago
JANE
Well, that sounds better than my school, David. My school was where I lived. I went to a country school that had three rooms, and each room had three grades, and you moved from one side, to the middle, to the other side as you went through the three grades. And you had the same teacher for all subjects, therefore for three years of your school life. I graduated from that after eighth grade, and I went to a local high school. This was in Council Bluffs, Iowa, so it’s sort of a rural part of the world, especially at that time. I couldn’t believe how open and free this school was, because of the way we had just been put into rows and taught what we had to learn. And I must say, I can’t criticize the teachers. Can you imagine having the same teacher teaching every subject at three different grade levels? I mean, what can the teacher do?
DAVID
Yeah, absolutely. Were there any advantages to that form of learning you can think of?
JANE
The form that I had, you mean?
DAVID
Yeah, three classes?
JANE
I don’t know. There was probably a negative impact in that there was a lot of competition, because not only were there three rows, your position in the row depended on your test results. And I was always trying to get to the front of the row. I mean, isn’t that crazy? So the only advantage I can see is that we as students had to do a lot of things that today they would never in a million years let students do. There was no indoor plumbing. There was an outhouse for the boys and an outhouse for the girls. And so the big girls had to take the little girls, and the big boys took the little… So we had a sense of taking care of each other. And that was very interesting, taking care of each other.
Another thing we did is, if we were selected, and I was one of the ones selected, but a lot were, we then directed traffic on the road that passed just by the school. We had a white belt and we would stand and there were signs warning cars that there’s a school crossing and all that. But we were there with our identifiers and we had to stop traffic and walk the school children across. And today, they’d never give that responsibility to a child. And so from that viewpoint, there was a sense of taking responsibility that I don’t think school kids have today.
The pride of sharing
DAVID
Can I say something positive about the internet and teaching in this regard?
JANE
Yes.
DAVID
To some extent, I think it has become fairly common for people on the internet, including kids, to want to share what they’ve learned and to teach others. And there are lots of motives for this. I mean, it’s internet fame and all that sort of counting your numbers, [inaudible 00:49:42]
JANE
The whole influencer thing?
DAVID
And not just influencers, almost anything. So if you play video games and you can’t figure out how to get out of a spot, Google it and there’s somebody who’s put up a video of how to get out of that spot. And more seriously, if you’re trying to learn how to make a birchbark canoe, I haven’t tried this, but I’m confident there are bunches of people who, because they know how to do something, for one reason or another, maybe it’s just for likes, maybe it’s because of a different idea about sharing knowledge, will go to the trouble of posting a video about it. Just about every topic has something about it on the internet. And those things got there, I think, from a sense that knowing things and not teaching them is selfish.
JANE
I think you’re right, David. I hadn’t thought about it that way.
DAVID
I would like to believe that.
JANE
And I’ve sensed it. I’ve watched a bunch of gardening videos. My husband’s watched a bunch of things about tasks around the house that he needs to take care of and so on. And often the person there is excited to be sharing. It’s what you’re saying: they’ve learned something and they want other people to avoid the mistakes they made at the beginning. That’s really strong in gardening and growing things.
DAVID
I mean, it’s strong in a lot of areas, from knot tying to cooking.
JANE
Yeah. David, where are you going now? What journey are you on after everything you’ve done over the last, what, 40 years? I’ve been aware of you only since 2001. I’m curious to know how you see your movement, I would say, towards the future.
David Weinberger’s new book and journey
DAVID
I’ve been working for a few years on a proposal for a book, which seems like a very long time to me because I’m a little obsessive about it, and I’m not sure why. And the book keeps changing shape the way that, as we talked about, titles change, because what the book is about changes, even though it’s really sort of the same book. But I think I’m getting closer. The reason the proposal takes so long to write is that I pretty much have to map out the chapters in some detail, and for me, that’s not just what they are about, but the sort of argument, how I’m going to talk about it. And I think I’m pretty close. I hope I’m very close to having a proposal I can try to shop around.
The premise of it is, and I fully believe the premise, that we’ve tended, for whatever reason, to understand ourselves and our world in terms of the dominant technology of the time. You can take this back at least to the 1600s, when watches were the incredible, mind-blowing technology of the time, and they actually still are, handmade timepieces. And we started thinking about the world, the universe, as a clockwork. We even called it that: it had the mechanical characteristics of a clock. And then you can look at various ages. Steam engines had a huge impact on how we think about our psychology. And among other things, computers certainly, where everything became information, and processing through set, logical procedures. With the internet, everything began to look like a crazy wild web or network. But now we’re in the age of AI. So I take that as a solid premise, and if somebody wants to argue, that’s fine.
I think it’s true enough. So the more or less outlandish question the book asks, and I recognize that it is, is “Well, is it too early for us to think about how AI, machine learning in particular, will change how we think about ourselves and our world?” And the right answer to that question is, yes, it is too early. We need to wait to see what will happen. But I don’t feel like waiting, and I’m willing to be… It’s a speculative book, and its position is speculative. And so it asks what concepts like creativity look like in light of AI, or free will, which is the odd one, or knowledge, a more solid one, or causality, which is again pretty solid, all the way up to what we think a thing is, an object is, which I’d been writing about for a long time. But then simulation theory came along.
The idea that we might be living in a simulation started being taken seriously, not so much by me, but it’s really interesting, because in the simulated world, everything works exactly as it does in the real world, except there’s no matter; it’s all digital. So we think things have substance and matter, but in the digital simulation, no, it’s just bits, which has a very direct effect on how we think about things. If we were to start believing that we live in a simulation, either it’s not going to make any difference to us, because in the simulation it’s exactly the same as where we currently are, or we may start to think that underneath the appearance of matter, the solidity of this cup, nah, it’s just programming, it’s just bits. So the book goes all the way through what we think things are. Which actually takes me back to my doctoral dissertation. I realized about two years into this, “Oh, that’s what my doctoral dissertation was on.” I’m still thinking about that, apparently. It was about the nature of things. Yeah.
Unanticipation, the strategy of strategies
JANE
When you’re speaking, it makes me think a little bit about the word unanticipation-
DAVID
Oh, geez.
JANE
… that you talk about a lot. Anticipation is so limiting, and it’s thanks to unanticipation that we have all these possibilities opening up in front of us.
DAVID
Can I say a word about that?
JANE
Yeah, please do.
DAVID
I talk about that in the last book I wrote, Everyday Chaos.
JANE
Yes.
DAVID
About half of which is about the internet and how it is changing how we think about some of these things. And the idea is that since Paleolithic times, our strategy, basically our strategy of strategies, has been to anticipate the future and to prepare for it. So if the weather is warming up and you’re a cave person, you’re going to start anticipating, maybe, the birds returning or whatever, and start making arrows. And if you’re wrong about that, the weather turns and the saber-toothed tigers return, and you’ve been making arrows when you should have been making spears, there’s a very high price to that, but there’s nothing else we can do. And so we anticipate and we prepare. It’s a pretty core part of us. I’m not suggesting we’re going to stop doing that, because you anticipate crossing the street by looking for the bus from either side, and you should continue doing that.
But there’s a way of taking the core thing that the internet has done, and, given that everything is miscellaneous, there’s more than one core thing. But one core thing is that the internet has given us the opportunity, and more or less trained us, to try to avoid anticipating as much as we can. So on the one hand, this is things like minimum viable products, which means launching a digital product with as few features as possible so that you don’t have to decide what the users want. Give them the one thing they want, and then see what else they need. Very successful strategy. But also agile programming, which can change as the circumstances change. Game mods, where users can add their own versions of the game or tweak the game in ways that nobody could anticipate, and even if the company did anticipate them, they couldn’t meet them all. The app store on the iPhone, which is really the thing that sent it to the stratosphere as the model of a phone.
It’s Apple saying, “We cannot possibly provide everything that this phone can do, nor could we anticipate all the needs that people have, from serious to frivolous. So let’s create an app store, and we’ll put in some guardrails.” And they have like 2 million apps now. APIs, which extend the value of a piece of software by allowing other people on the web freely to modify it, add to it, or integrate it. Open access is huge: “I’ll put this out. I don’t know what people are going to do with this code. I know why I wrote it, but somebody else might need it for something.” And in fact, the internet itself was designed this way explicitly, in a 1982 paper on the end-to-end architecture in networking, something like that, which asks, “Should we put security systems into the heart of the internet as a standard thing?”
And the researchers say, “No, because then we’ll be stuck with it forever, and it won’t meet all needs. Instead, we should build as little into the internet as possible and make it possible for the internet to be used as broadly as possible, for everything from providing security to virtual reality.” And that is a core principle of the internet. It is what network neutrality, which was just reinstated in the US a few days before you and I are having this conversation, is about as well: deliverers of the internet’s bits should not be able to dictate what those bits would best be used for.
So unanticipation is a really important, I think, strategy of strategies. Ultimately, it’s about making more things possible. Rather than trying to limit the future to the one that you want, you instead try to broaden it as much as possible, and the education thing, by the way, is another example, by making what you’ve done, what you’ve learned, as widely available to people as possible, without knowing, without being able to predict or anticipate, what they’re going to do with it.
Final advice from David: open up
JANE
Could we call that a final piece of advice that you are giving listeners to this podcast?
DAVID
I will sum it up in three words.
JANE
Please do.
DAVID
It looks nice in print with periods between them. The imperative is to Make. More. Future. Which goes back to the role of elders and adults in choosing what the future is going to be for our children. And since we cannot anticipate what they are going to want or what the world will be, I think the best thing to do is to open up as many possibilities as we can.
JANE
David, thank you very much. I’ve really enjoyed this conversation.
DAVID
Me too. I thank you so much for talking with me.
We talk with forward thinkers, sci-fi visionaries and pioneering organizations about people and society, AI and humans, the earth and survival. Read more at Imaginize.World
Subscribe on your favorite Podcast app