Setting the scene
Rawn Shah is a true AI explorer. From years in the corporate world working on collaboration, he is now in the AI world, specifically responsible AI, what it means, why it’s important, and how we can shape it.
I’d like to ask you, Rawn, a question, and that is, if I were to say I met this guy, Rawn Shah, and he’s a… How would you like me to finish the sentence?
RAWN
I would say a troublemaker. I’m always looking at projects that are interesting to me. Often, it’s something that’s just barely on the edge of technology. I’ve done that over my decades: at JavaWorld, when Java was a brand-new thing; then cloud computing with Network Computing World; then enterprise collaboration from our days there. Each time, it was just emerging. Now, when it comes to the current topic, I’m not quite there. Responsible AI has already been around for several years, but I find it newly fascinating and I really [inaudible] to jump in there.
What is responsible AI?
I would like to know how you would describe responsible AI. I’m struck by the fact that the word responsible is a word we tend to use for people, not for things. I think that already brings up a question about what AI is. What is responsible AI?
RAWN
Well, it is actually about thinking like people. A loose description is that it’s meant to help the AI system work with the values and ethical principles of human societies. That’s not one thing, because we have many different societies, we are many different peoples, we all have different values. But at the same time, this is something AI naturally doesn’t understand. It’s a statistical machine. It is purely math. Now, it’s a math of probabilities that’s producing this output, but that doesn’t include any understanding of human values, so you actually have to program that in.
One of the interesting phrases I came across is “stochastic parrots.” Stochastic means it’s semi-random. I mean, there’s a process, but you never know what to expect when it comes out. Parrot means it’s just repeating what it’s heard. So it’s randomly parroting things, except it’s really good at putting together sentences that sound good. Now, it really depends on what you train it on. If you have it sitting, watching YouTube or TV all day long, you don’t know what it’s been watching and what it’ll spit out.
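The “stochastic” part can be made concrete with a toy sketch. Everything below, the two-word vocabulary and the probabilities, is invented for illustration: the point is just that a language model picks each next word by sampling from a probability distribution, so the same prompt can yield different continuations.

```python
# Toy illustration of stochastic next-word generation. A tiny hand-made
# probability table stands in for a trained model; real LLMs learn these
# distributions over a huge vocabulary from their training data.
import random

NEXT_WORD = {
    "the":    [("parrot", 0.5), ("machine", 0.3), ("answer", 0.2)],
    "parrot": [("repeats", 0.7), ("talks", 0.3)],
}

def sample_next(word: str, rng: random.Random) -> str:
    """Sample one continuation of `word`, weighted by its probabilities."""
    choices, weights = zip(*NEXT_WORD[word])
    return rng.choices(choices, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so a given run is reproducible
print(sample_next("the", rng))
```

Run it with different seeds and you get different words back; that is the “you never know what to expect” part, even though the process itself is well defined.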
Why are people afraid of AI?
That’s very interesting. I think people in general hear the word AI and they’re afraid of it. I mean, the average person, not people who are working perhaps with it in their lives or organizations. But in general, people have a bad feeling, nervous feeling about AI. Is that the impression you have?
RAWN
I think one of the realities we face is that the basic statistical machine of AI already has a lot of responsible AI elements built into it. Now, we talk about this as if it’s a done deal. No, it’s an ongoing, constantly adapting thing that we are discovering along the way. We are trying to make it more trustworthy, we are trying to make it more accountable for what it says, like, “Where did you hear that?” Have it respond, “Okay, here are the references.” “Give me a rationale for why you said that.” “Well, here is my train of thought.” Those are actual elements that you’ll find in some of the AI systems out there. In terms of, is it safe? Well, it’s only safe or responsible because we are actively trying to make it so. I say we in the collective sense: every company, every organization, governments. But that also puts it at the mercy of who that we is.
JANE
Exactly. I was just going to ask you about that.
RAWN
Right?
The new AIs are shining bright lighthouses. They are mirrors.
I think that’s one of the big question marks, isn’t it?
RAWN
Yeah, it’s one of them: what are their intentions? When we speak about AI, I’m generally talking about the more popular ones out there, the brand names. You can easily name some of these: Mistral, ChatGPT, Claude, DeepSeek, some of the different ones across the world. But those large LLMs are basically the shiny ones. They’re so shiny, they’re lighthouses, they’re blindingly bright. The way AI is actually being used everywhere else is off of those. Those other systems are the mirrors. They’re the ones that capture the light from those systems and then reflect it through their business processes, their data and information, and their activities, whatever the purpose is.
While you’re looking at the bright light, and perhaps you’re out there interacting with ChatGPT and any of those other systems, when AI becomes commonplace in the workplace, it will be all those other systems. That’s what we really have to be aware of. It’s incorporating the language capabilities and the reasoning capabilities with our information and our processes, wherever we are.
Do humans and AI collaborate? Individuals or groups?
JANE
Does that come down to the individual? Or is that something that can be done as a group or as a company?
RAWN
I think most of the work of actually developing AI-based processes is done by teams and organizations. It’s rare that individuals produce them. When it comes to responsibility and responsible AI techniques, let’s talk about some of these things. The topics we are really facing are things like accountability, intellectual property, diversity, inclusion, pluralism, transparency, sustainability, truthfulness, privacy, human rights, children’s rights. These are all things you could think about from your own perspective, but a lot of them also have to be handled at some organizational level, some team level, to make sure that when you are trying to build an AI system…
Let’s say it is a meeting conversation recorder and transcriber. You try to include some of these things in terms of what are the rights of using this information, recording and transcribing it, and who said it. If you have a transcriber that doesn’t say who said anything, you’re lacking accountability. You’re not saying who actually made that statement. Obviously, that’s a functional level you’d want. AI only knows the context you give it. These large LLMs are usually built for anyone. That doesn’t mean they automatically understand your context.
How does AI understand context?
“Oh, that’s something I’d like to do which is fun for me.” Well, it doesn’t know what you find fun. It may not even know where you are. If you’re planning a local event and trying to book a location for a group of friends, you have to provide all that context to the AI, but you’d also have to provide context like, “Oh, make sure that we understand the terms of where we are booking this event.” You can’t blame the AI if you don’t look up the terms of the venue that you actually booked.
Now, on the other hand, on the organization side of things, when they’re building this, they need to work on each one of these rules as well. The statistical machine, the model that is the AI, is an inference engine, but when we actually talk about these systems, it is the inference engine plus other sorts of rules. These are the governance rules, these are the management rules. How do you retain information? How do you utilize visual information, auditory information, textual information? Those are the rules. That’s the responsibility of the organization that provides the service, and you have to think about that. It is not an entity; it’s an organization providing a service to you.
Human + AI Explorers – what and why?
I’m going to move up a little to this community that you’re involved in. In fact, you’re one of the leaders. It’s called Humans + AI Explorers. First of all, the name is intriguing. Explorer is such a good word because it’s not expert, it’s not advisor, it’s not consultant. We’re all exploring together, humans and AI. It’s a [foreign language] name, as we would say in French, a genius name. I read about the goals of the group and it seems so incredible and ambitious. It sounds wonderful. Can you tell me a little about the group, or the community, as I think you prefer to call it?
RAWN
It’s exactly as you described it. We’re explorers. This is a global community, and I came across it when I was chatting with an old acquaintance, Ross Dawson. Ross is based out of Sydney. He is a thought leader who speaks at events all around the world all the time. This was particularly interesting because of the approach of thinking of humans and the AI collaborating together. I should say not the AI, but plural in that sense. I’m not sure what the plural of AI is. AIs?
JANE
We don’t say AIs, do we?
RAWN
We don’t.
JANE
It’s like this entity that’s broader than any one thing. That’s the funny thing about the word, actually.
RAWN
Yeah, it’s like moose. Is it one moose or is it multiple moose? AI is moose. It’s really about the collaboration part. In terms of how do we interact with it, not just as an individual but as it could be any number of any size of things. It could be a team of people, it could be a whole company, it could be multiple organizations, it could be a whole society, and understanding the models of how we interact.
What caught my eye was that understanding the models, because if you remember a decade ago or more, I did a book on enterprise collaboration models. That was really about how do people work with each other. What are the ways that people work with each other in using online systems? Now, if you swap out some of those people with the moose, then you get that kind of collaboration that’s going on. Now, the rules are different and the way you engage with it is different. Is it an equal partner? Is it a tool? Is it a member of your team or different? That’s the dynamics that we are trying to explore.
The spaces are a new thing. It’s only been a few weeks since we launched each of these topic spaces. My topic space is responsible AI. Jax’s space is whole systems, meaning the entire ecosystem of what’s involved in using or collaborating with AI, everything from where the raw minerals come from, to sustainability, to how it impacts society. There are other ones on foresight: how do you develop futures thinking in collaboration with an AI? In each one of these spaces, we are thinking of a different role where the human and the AI fit together, a different dynamic that’s going on there.
JANE
Yeah, that’s really interesting. What output will be coming from the different roles or the different…
RAWN
Spaces.
JANE
Okay, so it’s not necessarily a group of people. A person can belong to several spaces?
RAWN
It’s the spaces that you’d want to join. Just to list them out: AI and the enterprise, led by Mary Daly. AI-augmented strategies by Ross Dawson, which is how do you define business strategy and organizational strategy using AI. AI-augmented foresight by Dennis Drarger. Then there is the whole systems space that I mentioned, led by Jax NiCarthaigh. That’s not what you might think, and that’s on me if I’ve mistaken your name, Jax. And then there’s myself with responsible AI.
Human agency: Am I creative or is it AI?
These are the different elements in there, and my space in particular is about the human and ethical elements of the AI itself, and there’s a lot. I spouted out a number of different topics and there’s no clear answer. My personal interest inside this entire domain is human agency. Let’s say you’re an artist and you work with an AI to create a piece of art. Everyone professes themselves to be the brand-new artist who’s come up with this genius way of doing something with AI. The question there is, who created it? Is it yourself? Is it the AI? Is it a combination? What is your ability to define that? What was your agency in the work?
Now, that’s simple if you just think of one person and one AI. Now, if you expand that to whole teams of people and AI, how do you understand who actually contributed to the output of the work? Trying to understand the human agency is one thing. Now, what about the AI? Does the AI have agency? Does the AI have a desire to be recognized as an equal contributor to the work itself?
JANE
And that comes down to property rights, doesn’t it? Who owns it?
RAWN
Possibly, that’s one interpretation of it. Even if you take out the property question, it’s still a question. You want to understand, “Am I the one who’s being creative?” I think for organizations, this is going to be a real challenge in a future we haven’t arrived at yet. As we include more and more AI into whatever processes we have, we’re going to start asking, “Well, how human is your organization? How much of the work is actually being done by people, as opposed to the automated systems or the AI systems or everything else that is just not human?”
How human is your organization from a social and global perspective?
And when you start looking at it from a societal perspective and a global perspective, consider the issues whole countries are facing: a shifting labor force and aging demographics around the world. It’s happening in the US, it’s happening in Japan, really in most of the developed nations. How do you keep people occupied and employed? Bored people are probably going to be unhappy people, so it’s not simply a matter of whether there are people working on projects, but whether we are actually engaging in creativity.
Going back to those stochastic parrots, it’s a real question whether creativity is happening, or whether it just happens to come across some random association of ideas that we find interesting: “Oh, I wouldn’t have thought of that before.” But is that really art, or is that pastiche?
JANE
But don’t the pieces that make up the LLMs, the pieces that have gone into them, come from humans?
RAWN
I would say originally, yes. Now, here’s the interesting part. The amount of information is growing and a lot more information is being generated through AI.
JANE
Oh, I see what you mean. Yeah.
RAWN
At some point, there’s going to be more AI-generated information than there is going to be human-generated information. It’s feeding on itself.
JANE
It’s interesting. I can imagine a diagram would be like circles with arrows going both ways where the humans are feeding AIs and the AIs are then feeding humans and AIs feeding AIs. It’s sort of a spiral that… Where would it end?
RAWN
Yeah. I’m not sure I can predict where that’ll end, but if you think about it, individual people are not really worried about that. Basically, it’s a common problem across society but not any single person’s. What do we do about that? I know I’m focusing very much on human agency in the world when it comes to AI, but I think it’s important to try and understand just how much people are contributing. We are currently looking through the lens of, how is AI being useful? How is AI creating value for our organizations? Well, how are humans creating value for your organization? It’s a classic management problem. We still haven’t answered that.
Transparency, truthfulness, and trust in AI systems
And I think, Rawn, another problem is the question of trust. Can you trust what AI is providing? You’re probably going to say to me, “You have the same problem with humans. Can you trust what humans are providing, say, to an organization?” I think trust has become something really, really important today, maybe more than in the past.
RAWN
I would agree with you, and I think there are different aspects to it. Transparency is one part of it: “Where did you get that bit of fact from which you assembled this idea?” Truthfulness is another part. I would argue there are no absolute truths, but that’s an ethical perspective. Is the source of information you provided hyperbole? Is it reasonably factual, like mathematical fact? Let’s put mathematics at the base of it all; it’s the underlying theory behind physics and all the other sciences. Understanding truthfulness and understanding the transparency of the information and the sources you’ve got matter, because we also live in a world where there are active parties that are not necessarily working in your interests or the interests of everyone.
Not only that, there’s no end of memes in the world distorting what actually happened, because the meme is so much more popular. When you have a stochastic parrot that’s looking at all these memes and coming up with an answer that overpowers the original fact of what happened, which one is the actual answer? The AI doesn’t actually know. It is working on the popularity of ideas, which ones make more sense to it, as opposed to which one is the original fact that created this whole spread of memes.
JANE
AI should be able to go back and get the original fact better than a human can.
RAWN
Right. And it should be able to do that, because it’s essentially a piece of data somewhere. This is where we need responsible AI. Now, even the large LLMs will not necessarily tell you where they got that piece of data. This is a transparency question, because what we’re talking about is the data it was trained on. The largest ones have a trillion parameters and a trillion tokens. Basically, that covers every single word and concept you can think of that exists.
JANE
It’s unimaginable for us. Unimaginable.
The Swiss AI Initiative
Right. And then beyond that, there’s the actual data. A piece of data might be a one-gigabyte file or it could be two words of text. Trying to determine what went into a result, it could be thousands of pieces that led to that particular thing. But you don’t have transparency, not only of what source it used, but of how it used it. The reason you don’t see that coming out of all the commercial LLMs is because it’s proprietary, their strategic advantage. I think I caught your eye again recently when I spoke about the Swiss AI Initiative.
JANE
I was going to just ask you about that, yes.
RAWN
Yeah, I found that interesting. I probably know as much as you do on it, because I’ve not been involved in it directly. But the Swiss AI Initiative is a sovereign AI initiative, meaning a national-level one, in Switzerland. There are others across the world. India has several, there are others in the EU, in Japan, China, obviously, and you’ll find them all over the place. We’re actually-
JANE
How about in the United States?
RAWN
Well, there is no clear sovereign one in the US. There are probably multiple ones being done in government, but there’s a difference with a sovereign AI that is publicly visible and accessible. Maybe not necessarily to every single individual, because you might have to be a research organization or research team that needs to work with it, or a corporate organization that wants to build something with it.
For example, the Swiss AI Initiative is designed for that. It’s meant to spur startups, government organizations, and researchers to use a very responsible AI system that is transparent about where the data comes from, how that data is being used, and how it is output. They’re also looking into other factors. Sustainability is one: what power systems are being used to run the massive parallel supercomputer that everyone’s running off of? You can’t be responsible if you are just burning through coal energy to provide answers you could find anywhere, like, “What’s the capital of France?”
Surprisingly, those silly, simple questions come up the most, as opposed to the really detailed, million-word questions. That’s the true power you really want. If you have a deep question that will take a lot of thinking, that’s the kind of process you want AI for; don’t just use it as a simple search system when you could use a search engine itself. That’s maybe my little soapbox there, but think about it. The AI is using a lot of energy to provide you that answer, and that is a cost. You may not feel it immediately, but somebody’s feeling it. Whether it’s a really expensive question or a really simple question, there is some base level of energy being spent just to answer it. So, going back to the Swiss initiative.
From an AI human agency lens
I wasn’t surprised by your description of it in your LinkedIn post. I’ve worked in Switzerland a few times with companies, and I’m really struck by the Swiss mentality when it comes to making decisions, the importance of the local voice. I think they have weekly or monthly voting systems. The idea is that the people are making decisions themselves, and I find that goes very much along the lines of responsibility for what AI is doing. I haven’t given it enough thought yet. I’d have to review my notes on the Swiss way of exchanging information and making decisions, but I have a feeling that’s probably an underlying characteristic of their AI initiative. I don’t think it’s a top-down thing, is it?
RAWN
It is certainly sponsored by the government, and it is based in and run from public institutions. A coalition of universities is behind it; ETH and EPFL are the leading ones. But their goal is to provide not just the computing resources of the system, but also access to work with, I believe, somewhere around 800 different researchers who are all actively looking into this. You can think of it more as a human and AI collaboration there.
My AI human agency lens also comes back there. The projects that you do… because they’re sponsoring grants of compute time, essentially. This is like the classic supercomputer days: you actually have to apply for time on the system. There are small grants for startups, and there are also large-scale grants. There’s a particular set of goals they have around what they want to develop. I would say one of the topics they’re really interested in is how we work with it in terms of ethics. We are using the AI and collaborating with it to help try to understand what we collectively think about ethics with AI.
JANE
There’s a little circularity there now.
A feedback loop from the long gone past
Yeah. Yeah, it’s a feedback loop in itself. I’m kind of curious, because we are going through ideas that came from Locke, came from… It’s ancient. Not truly ancient, but several hundred years of ethical principles and frameworks. How do they apply to these modern digital situations we have?
JANE
One of the priorities for them was to support intellectual property and to avoid using copyrighted material. Now, I think that’s a big question, using or not using copyrighted material. It came into question even before AI did. The Wayback Machine was an early example. There was the idea of whether or not people can look at information that they haven’t paid for. I think this question of owning information or owning creations is not a new question, but in the case of AI, it’s coming to the attention of a lot of people.
RAWN
Yes, and it’s something that they’re also grappling with. I mean they meaning governments everywhere, jurisdictions everywhere. If you take the knowledge of previous IP somewhere else, process it in some way, and produce new results that do not regurgitate any of the original information but are something different, is that still using or infringing on the intellectual property of something else?
JANE
Yeah.
RAWN
That is a very simplistic translation of one of the problems there, but the Swiss team actually looked into this. There’s a research paper they published asking, “What is the effect of not using copyrighted information? What is the actual impact on the results, the outputs, and the accuracy of the AI itself?”
JANE
What did they find?
RAWN
It is minuscule. The effectiveness of their model in terms of the results, the accuracy of the AI model, which is actually a mathematical calculation… The loss of accuracy is less than a percent, so it’s almost negligible to leave out the copyrighted material.
JANE
That’s interesting.
Domain-specific information, often copyrighted
Right, but that’s true for general questions. It’s when you start getting domain-specific, when you want to know about a particular domain. I mean, I’m not talking about a branch of biology, but, say, a very specific protein molecule and how it works. Well, that requires access to probably copyrighted information.
There are ways you can work around that. Let’s say you are the owner of that copyrighted information, but you need a reasoning machine to be able to ask questions about your research. Sure, you might already know some of those answers, but when it comes to some of this research, there are multi-million-page documents describing just a single molecule. Some of the patents out there for pharmaceuticals are really that complicated. Finding specific information in there is hard enough, and AI is a more sophisticated search engine.
What they do is take a foundational AI. These are the famous ones you know of; they call them foundational AI, and the Swiss model is a foundational AI. Then you pair it with your own database within your organization. What that does is give the AI limited access to your information, which you control. You can set the limits of what that access is, and you’re using the reasoning capabilities with the known information to produce results you can use inside your company.
JANE
And by doing that, have you made your information available to other people to use?
Why is RAG – Retrieval Augmented Generation – important?
No. This is called RAG, retrieval-augmented generation. Essentially, the AI goes to some other data source to retrieve the information and then incorporates it into its reasoning. It doesn’t keep that information, but it really does depend on how you build the integration between the LLM and the RAG.
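As a rough illustration of that retrieve-then-reason loop, here is a minimal sketch. The corpus, the query, and the keyword-overlap scoring are all invented for this example; a real RAG system would use embedding-based vector search and pass the assembled prompt to a live model.

```python
# Minimal RAG sketch: retrieve relevant passages from a private store,
# then build a prompt that grounds the model's answer in them. The
# retriever here is a toy keyword-overlap scorer, not vector search.

CORPUS = {
    "doc1": "The venue deposit is refundable up to 14 days before the event.",
    "doc2": "Catering must be booked separately from the venue rental.",
    "doc3": "The meeting room seats up to 40 people theater-style.",
}

def retrieve(query: str, corpus: dict, k: int = 2) -> list[str]:
    """Rank passages by how many words they share with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble the grounded prompt; the model never stores the passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

passages = retrieve("Is the venue deposit refundable?", CORPUS)
print(build_prompt("Is the venue deposit refundable?", passages))
```

The key design point Rawn raises is that the retrieved text lives only in the prompt for that one exchange: the organization keeps control of the data, and the quality of the whole system rides on how this integration layer is built.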
There’s an interesting report that came out recently from Bloomberg, the financial information and media giant in the US. I’m trying to remember the name; I think it’s Dr. Seb Gehrmann’s team. He’s the head of responsible AI at Bloomberg. They found that if you don’t do it right and you don’t put in your own controls, connecting a RAG system to an LLM can make it less secure.
We’re essentially devaluing the capabilities of all the energy that, say, Google might put into its LLM. His situation was, “Okay, we are a financial information company. What if someone internally asks, ‘How do I manipulate the stock price? Show me a way to do insider trading. Show me a way to commit fraud in some way or another’?” That is very deeply domain-specific contextual information. The best way to handle it is to put in those responsible limits: you can’t ask that question, and if you do, probably someone’s going to come knock on your door. But somebody has to put in those rule sets.
JANE
Someone has to foresee those possibilities.
RAWN
And that’s the hard part. How do you foresee all the possibilities that someone might ask?
JANE
All the possibilities of misuse.
RAWN
Right. It’s not perfect.
JANE
You should ask the AI to define as many ways of misuse as possible.
RAWN
Maybe, but I think it really requires more of your individual human domain knowledge to put that in. That would be more effective, and I personally like that because it gives more value to the humans. It’s not every person, but it’s certainly humans playing a part in making sure that we ourselves are using the systems responsibly.
Potential evolution of the role of humans in shaping the development of AI
Do you think that role of humans will continue as is, will it grow, will it decrease? How do you see it developing over, say, 10 years? That’s a long time. The role of the human that you just described as being important, do you think it will continue to be important?
RAWN
I think that particular role is, in a way, about helping the AI understand people better, as well as setting limits, like, “No, this is something you don’t do.” That’s like a parent teaching a child, saying, “Sorry, that’s not a good thing to do.” As the child grows, the topics and questions that come up are different. Yes, you do hit a point where the parent says, “I don’t know the answer to that one. I can’t tell you what is right or wrong in that situation, but I can tell you what I know about this. At some point, you’re going to have to make that judgment.”
Do we get to the point where it has so much rational capacity to understand the implications of what it does, and understands human society so well, that it can take care of itself? I can’t answer that one, but I think, for a good long time, we as humans still need to describe how humans feel and what we value. Those values describe the conditions under which we work with the AI. A lot of the straight-out mechanical processes, even some of the intellectual but fairly mechanical processes that we go through day-to-day, maybe those will get automated. We’re left with a world where we spend more time thinking about things, things like ethics.
JANE
That’s interesting, Rawn, because that’s the underlying goal of my podcast: how can we shape the future before it shapes us? One of my questions to you is, what can the average person do, if anything, to shape the way AI develops?
The three Hs – Honest, Helpful, and Harmless
I think the average person first needs to understand how to work with it. There are so many possibilities. Most people start out simple: “Hey, I just heard about this fact,” or, “I need to do a little bit of investigation into this topic.” But those are kind of basic things. Really getting into discussions with AI about a deep subject, that is a little more advanced, and that’s where I think most of us should be. How do you think critically about a topic and use the AI to help you think critically about it? This doesn’t mean you just take the answer from whatever the AI says. That’s the trap. It’s so convenient, so easy just to say, “Oh, look, that’s a great sentence. I’ll just use that.” But is that true? Is that useful? I think Anthropic came up with this model: “Is it honest? Is it helpful? Is it harmless?”
JANE
The three Hs, honest, helpful, and harmless.
RAWN
They don’t always come together. It could be an honest answer, but it could be pretty harmful, too. The AI doesn’t necessarily know that, so even if you get the response from the AI, it’s like, “Oh, look, this is a truthful answer. Should I use it?” That’s your question. The ethics is not just the ethics of what the AI is doing, but the ethics of how I am responding to and applying whatever it said. It, I should say, not they.
JANE
It’s interesting, when we talk about AI, whether we have a pronoun we can use for AI. They, I guess. I don’t know.
RAWN
Yeah. Well, I like the moose analogy for pluralism. Here, they is the plural of it, as opposed to they as the plural of he or she, right?
JANE
Yeah.
RAWN
And it isn’t necessarily one thing. This is one of the realities we also have to face: the AI you interact with may be a single interface to you, but there could be a lot of different systems involved in that magic. It could be multiple different AIs that are debating with each other, or working with each other, or scheduling, going around in the background doing all that work, but you only see one output. They is a better word in that sense.
JANE
Rawn, I’d like to ask you, and this has been really fascinating: how do you see your personal career evolving, say, over the next 10 years?
RAWN
Well, I’m really interested in doing research into the topic I mentioned, human agency, because I think it’s about a fundamental understanding of how we are creative, how we are innovative, how we contribute. Whether it’s an AI or other team members, I think it’s the same question. What is our piece of the contribution? What are the other pieces? I think [inaudible] research will genuinely help the world. I am curious and interested in looking at projects or jobs that are about being able to discover that and apply it.
We’re talking about this as a sub-domain of responsible AI, but the larger question itself, I think, is something every organization should be asking at this point. I’m not merely advocating it for myself. If you are thinking of applying it, you need somebody out there thinking about this aspect of what your organization should do, not merely from a business strategy perspective but also from an ethical perspective. I came from the business strategy part. I’m looking into the ethical part. That’s how I see it. I’m somewhere in between.
JANE
Interesting evolution. You’re somewhere in between, so maybe you’ll get over to the ethical part eventually and then… I don’t know where you’ll go after that.
RAWN
Well, hopefully the south of France sitting there and having…
JANE
Do you have anything else you’d like to add to this conversation before we close it, Rawn?
Join our community: Humans + AI. Say you came from Imaginize World!
Well, I think the best I can do is get you to start thinking about responsible AI. Listeners, please come over to the community so you can join in and converse about it. We’re interested in what you are thinking as well, because, honestly, don’t expect there to be a perfect answer out there. There are no pure experts on this topic. I wouldn’t call myself an expert, because I think the whole point is to experience it with others. Humans + AI, that’s the community itself: community.humans-plus.ai. Come over and we can talk more about it there.
JANE
Rawn, after going through the website, I’m thinking very seriously about joining.
RAWN
Oh, good. I think we could use your help with a lot of things, in the different spaces there as well.
JANE
It’s very, very interesting. Very interesting. Well, thank you so much for your time. I’ve really enjoyed the conversation and maybe you need to go get yourself another cup of coffee now.
RAWN
Excuse me. It’s morning. I have to drink coffee.
JANE
Yeah. For me, it’s late afternoon. I have to have a glass of red wine. Cheers.
RAWN
All right. Au revoir. Thank you.

We talk with forward thinkers, sci-fi visionaries and pioneering organizations about people and society, AI and humans, the earth and survival. Read more at Imaginize.World