Unanticipation with David Weinberger
Founding values of the internet under threat
Imaginize World is about how people imagine the future in order to build it. And I’m orienting the direction more and more towards thinking about youth, about the future generation. How can they be inspired? What should they be thinking about? What should they be doing? What can they do to create the kind of future that they want to have?
One of the most crucial questions. For a long time, I’ve been a member of a network that started out as a little conference. Its founding question, which people stick to pretty well even as lots of other things come up, is: what is the internet that we want to leave our children? This was in 2000. AI was not yet the thing, and the internet seemed tremendously important, as I think it is. Its founding values were under threat even in 2000. That’s a subset of your question, but it’s the same sort of conversation. I think these are really, really valuable conversations. I think they’re essential conversations.
Mental models about tech
Let’s move to 2019 and Everyday Chaos. This is a book, I think, that could have frightened a number of people, even though your subtitle, Technology, Complexity, and How We’re Thriving in a New World of Possibility, is a very, very upbeat one. You move into chaos and you talk a lot about machine learning and deep learning, and I think that’s maybe what puts people off. It’s related to AI, ChatGPT and all that, which people are taking for granted now, but I don’t think most people really stop and think about machine learning and what it means. And you talk about that quite a lot.
Yeah, that’s what I’ve been writing about for seven or eight years, because I find it fascinating. Most, if not all, of what I’ve written over the past 40 years has been about how tech affects how we think about things, the sort of mental models that we have. Not a phrase I’m crazy about, but it’s probably the right one. The subtitle… I’m sorry, I’m going to go back to subtitles. The subtitle of Cluetrain was The End of Business as Usual. And in lots of really important ways, business as usual did not end. In lots of important ways, it did, and I think we’ve gotten so used to those changes that we don’t always notice them. But that subtitle would need a lot of work now. If a book came out with that subtitle today, we could look back over the 25 years and say: nope, that’s not what happened at all. Big companies got bigger, et cetera. Lots of things did change, though.
My point is, that’s the subtitle I maybe regret the most, although it was very effective for the book and we believed it at the time. But that’s because it was a prediction about what will happen, and I have no power to predict what’s going to happen. I’ve never thought that I did, and I don’t. I don’t know who does; maybe nobody. It’s certainly not me. I’m way more interested in how living on the internet, as we were starting to do in 1999 and 2000, changes how we think about how things go together.
Minds tuned for survival, not for truth
You say that we have long believed that the universe is knowable, and that because we think it’s knowable, or many people still think it’s knowable, it’s therefore pliable to our will. And I love this quote where you say, “Evolution has given us minds tuned for survival and only incidentally for truth.” Do you remember writing that?
No, of course not. I do remember believing it. I mean, this is a very serious conversation right now, in fact, in the form of the claim being made by scientists that the universe is tuned for life, which is a really big claim. And this is part of the argument, or can be used to support the argument, that we are all living in a simulation. Because a simulation, if we are in one, absolutely is tuned for life. But the idea that the entire universe, and usually the people who are saying this, I think, are saying it’s not on purpose, happens to be the one in a gazillion universes in which life, and conscious life, would emerge is just so wildly improbable. And so right now this is one of the controversial notions that is floating around.
Fairness in AI models
One of the things about machine learning that you talk about is the idea that we as citizens, people of the earth, have the right to know, and should know, how it’s working, so we can validate it or agree with it. And you talk about the idea that what’s important is knowing the data that goes in; the understanding of how it works inside is beyond us and not necessary to know. We don’t demand that of other systems. So how can we accept, and I would even say celebrate, machine learning while ensuring that it still respects our human truths, our values, fairness, for example? That’s something you talk about a lot. How can we take control of that, or how can we influence it? I don’t even know how to ask you the question.
Yeah, those are good ways of asking the question. I mean, we know that AI models are completely capable of being wildly unfair, sometimes obviously, which is at least better than not obviously, and sometimes in ways we don’t even know about. That’s really, really dangerous. And I think we keep discovering new levels at which they can be unfair in one way or another. One of the good things this has done, and I hate to point out the positive side of AI unfairness, but I will, is that it changes how we think about fairness. Because if you are involved in trying to fix an unfair AI model, it’s quite likely that you are going to have to face questions that humans have a great deal of difficulty with. And you will discover, as researchers as well as practitioners have, that, oh geez, there are maybe a dozen different types of fairness, which is very important to know. But then you have to decide: which type is fair? Which type do we want in this case? And we don’t have good ways of deciding that.
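To make that concrete, here is a minimal sketch, not from the interview and using invented toy numbers, of two of the formal fairness definitions researchers work with: demographic parity (do both groups receive positive decisions at the same rate?) and equal opportunity (do qualified people in both groups get approved at the same rate?). The same hypothetical model can pass one test and fail the other, which is exactly the kind of choice being described.

```python
# Illustrative only: toy numbers invented to show that two common,
# formal definitions of "fairness" can disagree about the same model.

def demographic_parity_gap(rates_a, rates_b):
    """Difference in the overall rate of positive predictions between groups."""
    return abs(rates_a["positive_rate"] - rates_b["positive_rate"])

def equal_opportunity_gap(rates_a, rates_b):
    """Difference in true-positive rates (qualified people approved) between groups."""
    return abs(rates_a["tpr"] - rates_b["tpr"])

# A hypothetical loan-approval model evaluated on two demographic groups.
group_a = {"positive_rate": 0.50, "tpr": 0.80}
group_b = {"positive_rate": 0.50, "tpr": 0.60}

print(f"Demographic parity gap: {demographic_parity_gap(group_a, group_b):.2f}")  # 0.00 -> passes
print(f"Equal opportunity gap:  {equal_opportunity_gap(group_a, group_b):.2f}")   # 0.20 -> fails
```

Results in the fairness literature suggest that, outside of special cases, criteria like these cannot all be satisfied at once, so choosing a metric is a value judgment, not a technical detail.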
Coming to belief in conversations
And so it’s more important than ever, from my point of view, that we learn how to come to belief. And it also seems to me, obviously, but maybe obviously wrong, I don’t know, that we’re going to be doing this collaboratively. That’s what we’re doing now, and we have different opinions about what a good collaborative source is. Is it Facebook? Is it Twitter slash X? Please say no. But maybe it’s that. Maybe it’s Reddit. Maybe it’s how Reddit was five years ago, which is actually my point of view. Is it having a network with a mailing list or a group chat? There are all these different ways of collaborating, trying out ideas, and learning, some better than others. But I don’t think at this point we’re instructing children in how they are going to come to belief once they leave school. They’re probably not going to read books. They’re almost certainly not going to be reading a newspaper. I don’t think we’re preparing them for the new opportunity, which is conversational, a very dangerous medium, because lots of conversations are very bad.
AI and machine learning change how we think of ourselves
So the more or less outlandish question the book asks, and I recognize that it is outlandish, is: is it too early for us to think about how AI, machine learning in particular, will change how we think about ourselves and our world? And the right answer to that question is yes, it is too early; we need to wait to see what will happen. But I don’t feel like waiting, and I’m willing to be speculative. It’s a speculative book and its position is speculative. And so it asks what concepts like creativity look like in light of AI, or free will, which is an odd one, or knowledge, which is a more solid one, or causality, which again is pretty solid, all the way up to what we think a thing, an object, is, which I’d been writing about for a long time, but then simulation theory came along.
Unanticipation, the “open up” strategy
Unanticipation is a really important, I think, strategy of strategies. Ultimately, it’s about making more things possible rather than trying to limit the future to the one that you want. Instead, you try to broaden the future as much as possible by making what you’ve done, what you’ve learned, as widely available as possible, without being able to predict or anticipate what people are going to do with it. Education, by the way, is another example of this.
For more from this interview, subscribe to Imaginize World on YouTube or wherever you listen to your podcasts.