Will technology have our back?
I'm trying to gain a deeper understanding of technology and its impact as an enabler for people to do work and communicate, to essentially have something behind them, something that has their back as they do what they need to do in their own lives.
Sustainability and generative AI
Sustainability, from my perspective, is understanding the limited resources that we have here on Earth. We don't really have a history of using renewable resources; we tend to use things that are not abundant, things that are limited. So sustainability means switching from non-renewable resources to renewable ones, things that are in greater abundance and not of limited quantity, while still getting the same quality, or close to it, that we've been getting from limited resources, resources which are also damaging our global home. So when we look at technology and where we will be in the future, it needs to be friendly, or kind, to that sustainability.
Seeing things that humans cannot see
As we look at generative AI, it is doing something very close to that in terms of the computational power required to churn things out, hallucinations included. So what is the value of what we are getting out? How can we have technologies that will have our back? Traditional AI and ML models can be somewhat computationally expensive as well, but they can see things that humans cannot see and give us predictive models, the understanding that "Oh, we've got a problem with this." So as the climate is changing, they can follow wind models and other things we are depending on as renewable resources, seeing those shifts before they are humanly detectable.
Humans got out of caves by sharing knowledge
Is it hurting the people who create original content?
I come from a background where there's academic proof of where ideas come from, where you can point to things and say, "Oh, I got this idea here." It's also just a really good human trait to be able to say, "Oh, I read this from Jane. Jane says this, go read Jane's piece. Here's my take on what Jane is saying." With generative AI you lose that human link back and the giving of credit. The whole model for how humans got out of caves is built on sharing, sharing of knowledge, how we share good information to move ourselves forward as a human race. Improving the human condition broadly relies on us sharing information and being able to point back to truths: where did this come from, what's the background on this? We need to be able to lean on things. And generative AI, for the most part, doesn't care about that. It is just spitting things out.
Protecting content providers, protecting humanity
It comes down to the security and privacy constraints around it, and also being able to sort through what is truth, what is not truth, and what are made-up facts. Coming out of a background where you do academic or even professional work, the question is always, "Well, where did you get this? Is this a hunch you have, or is it something that is actually proven, something you can back up?" It's being able to prove your work, to prove where you got something. I have strong feelings about having that, about being able to know where something came from.
I wonder if it’s going to come down to legal questions.
A lot of the legal and regulatory discussion within the generative AI community comes from companies that already have a lead, more or less, and essentially the regulations they are talking about protect that lead rather than protecting the sources and protecting humanity.
Students not learning from generative AI
Looking at students who are using it, and also talking to an awful lot of people who do personal knowledge management and heavy note-taking, they say, "Oh, have a generative AI go out and read this article for me. Give me a summary. Turn 1,200 words into 300 words," and they put that in their notes. They didn't learn anything from it. When you're reading through something, the ability to essentially have an argument with it, do I agree with this, do I not agree with what this article is saying, is a really important part of understanding and building your own knowledge base in your head.
Giving pointers to good resources
That saves time if AI is pointing students to things that they might otherwise not come across.
And you can use a generative AI tool for learning, to understand foundational issues where you may have a gap. If you're taking your third semester of computer science or environmental theory and there's a concept you completely skipped, or you got a D on that test, you need to understand what it is. So you can ask, "Hey, can you give me a good overview of what this is? Can you give me more information? Can you give me pointers to good resources?"
Bringing together people who are near in thought
There's a tension that I see between technology and humans interacting with humans. Technology is great at bringing together those who are near in thought. For example, I met you at KMWorld, I think the first one in DC.
That was a long time ago, wasn’t it?
Yeah. And that was my second KMWorld. But we got to know each other essentially through digital environments that bring people who are near in thought together, even though we're geographically very far apart.
Human-to-human interaction
There's a balance to watch in what is happening in the world right now. Having that understanding of human-to-human interaction, how to get along and how to work together, is highly important. And seeing the things that are happening, I think large parts of the population have lost that.
Banning the word trust
I think trust comes into play.
Right. I have a difficult time with the word trust, not because of trust itself, but because the word has many different meanings. In an awful lot of the consulting work I did, people would lean on trust because it is a very powerful word. Around 2008, 2009, I started banning the word trust, and you had to use other words.
Oh, that’s interesting. What words did people use?
One of them was comfort, as in, "I find there's comfort in this; I have confidence in what they're saying." There were about 10 to 12 different terms that people were using regularly.
Bringing the technical and social sides together
Take a large multinational company that ships books all around, one which I may have used more than once. They could recommend something and point you to Susan's Book Nook, and they may get a cut of 2% or 5% for making that recommendation. They're still making money on it, and it's bringing connection to your local community. So you're looking at the technical side of things and the social side of things and bringing them together, so that we're coming together as humans while also increasing human knowledge and understanding.
Sustainable technology – criteria for the future
Technology really needs to sort out where it sits on sustainability. That means having more secure systems, having systems that respect our privacy, having code and systems that are far more efficient, and running them on renewable resources. Not offset credits, "We planted 700 trees, so therefore we can set up this new server farm," but genuinely using renewable resources and not doing the trade-offs. The faster we can get there from the technology side, the more we can do with technology and essentially have technology have our back, rather than having it become part of the problem instead of part of the solution.
For more from this interview, subscribe to Imaginize World on YouTube or wherever you listen to your podcasts.