Below is the full transcript.
Setting the scene
If the future seers were always correct, like they are trying to say this AI is going to replace humans, we’d have Jetson vehicles. We’d all have flying cars already.
JANE
Today, my guest is Mark Grob, the immersive tech guy for UPS. He builds next gen products for UPS Enterprise. He’s focused on applications for training, real-time data visualization, and emerging innovation sectors. Mark was the third guest on my podcast, Imaginize World, and this conversation updates us on his thoughts today over two years later.
Really great to see you again, Mark. I bet a lot has happened in your world, in your VR world.
MARK
Yeah, I mean, the immersive tech stuff, AI over the last year got really important. We’re seeing very interesting trends on the hardware platform side. Yeah, there’s been a lot of things that have changed.
Using AI in immersive tech
Have you started using AI yourself now in your work with XR?
MARK
I would argue probably since 2019, I was using AI in our work. What they now call small language models is what we’ve been using in our space. So on edge sort of AI stuff. But from my work style standpoint, I mean, I’ve been using, I think it’s like Usemotion and a few of those other sort of virtual assistant platforms personally for a while. And inside of my organization as a whole, I mean, we’re using things like Copilot like every other big Fortune 50. We’re using Copilot, all those great tools for a variety of different things from creating business plans to help automate procurement.
So I kind of feel maybe I was ahead of the game, but from the standpoint of AI, I mean really you’re kind of almost forced to at the Fortune 50 level to use AI and-
JANE
Why are you forced to? What do you mean?
MARK
Oh, I mean there’s a lot of, how do I say it? There’s a lot of amplification of the benefits of the technology and also a heavy amount of investment for the technology, which we’re seeing that impact other areas in regards to people’s employment. That’s generally more of the focus. It’s the idea of the human having tools to sort of amplify to make their job more productive. That’s generally the mantra.
Will AI replace humans?
Yeah, that’s something that there’s a fear a lot of people have that AI will replace humans in all domains. I’m not just thinking about in companies, but in all domains, all kinds of work. But in fact, from my own little bit of research, it seems like AI is simply augmenting the capacity of humans. Would you agree with that?
MARK
Yeah. So I mean, if the future seers of the world were always correct, like they are trying to say this AI is going to replace humans, we’d have Jetson vehicles. We’d all have flying cars already. I think the idea of AI totally replacing people, it’s just not at this point of the technology. I really feel it’s being hyped, just like VR has been hyped, AR and everything else in the past.
Being an emerging technologist as well as an immersive technologist, we see this happen. I'm sure you can find voices and individuals already questioning the AI bubble. Really, my feeling is that AI has value to augment the workflow of any individual. It doesn't have, what I would call, the confidence to do what a human does to the point where, "Okay, we're totally going to replace this human with a robot."
Even with humanoid robots, there are certain aspects, like you're seeing in the home robot industry, where they're starting to emerge. A lot of those technologies are based off of observational methods or reinforcement learning that's used with virtual reality tech. I'm virtually inside of the robot, I am doing the actions, it's remote controlled, and I'm able to clean your house. You're still seeing the complexity of thought still needing a human. And the reason for it is, like any technology, there are guardrails, right? There's a reason for guardrails, because if we kind of do the whole, "Oh yeah, just let it go," there's a lot of danger. And if you're talking business now, there's a lot of potential brand damage to the use of AI.
So the removal of the human, I don't see. Maybe possibly for highly repetitive physical things that humans do currently. Maybe there's warrant for that, but then you're applying what they do in the auto industry already with high levels of automation. You're applying the same principles to other areas. There's still a human in the background overseeing, watching what is being done, so that if there's an issue, they correct it.
Also, with the advent of robotics and AI, there’s the opportunity for humans to upskill. So the aspect of, “Okay, you’re not fixing a fluids engine anymore. You’re now fixing a vehicle like it’s, be it a robot or a vehicle, standalone EV or whatever. You’re fixing it like you would a computer.” So the way of which the human interacts with the AI is changing. That to me is the more interesting part and the area of opportunity.
But the notion of the human being the weak link and the flow, I think the technology required to get to that level, one, in my opinion, isn’t there yet. And two, we’re going to run into issues of sustainability. How can you sustain that level of technology? We hear stories about data centers going up and they’re gobbling up water that would be the size of a small city just to cool the data center. Meanwhile, people are running computer agents to be virtual assistants to their daily job. Is that a sustainable practice? Who knows? But in my opinion, I think the humans aren’t the weak link. I think they’re really the guardrail to prevent brand damage to a particular business that’s trying to implement AI.
Hype or niche markets
What strikes me is that all the information that AI uses to do things comes from humans. Whether it's a large language model or a small language model, everything is being provided by humans for now. Although I think in medicine, there are things where AI can discover things that humans cannot see. So in my research, a lot of the interesting examples come from medical institutions, and they seem to me to be farther ahead in using AI in a sustainable way than most organizations.
MARK
Yeah. I mean, well, I’ll go back to the word of hype. One of the big issues is like any of these niche market, AI doesn’t want us to say that, but it’s still a niche market. It’s a case where there’s a lot of promise, there’s a lot of opportunities, but in so is the hype. A lot of what I would describe as sort of winning solutions for AI right now are common solutions. They’re not very specific. And anything in enterprise needs to be specific.
Just like your use case where you're talking about medical, right? We look at the old mapping of the genome and those aspects. Those are, just from a mathematical standpoint, very complex concepts that a human would have a hard time doing, and where there are benefits in that sort of rapid, repetitive analysis. Those are areas where it's beneficial.
In our case, we’ve done things where we’ve used virtual reality platforms and allowed AI to observe humans and how they interact with just simple things like lifting objects, learning sort of what’s the behavioral trends based on the weight of a product, the size of a product, and the area of which it’s going to be loaded. And then to have the AI understand or start to see sort of neural pathways around the mannerism of a human. You see benefits in the sense of, well, now we have gained stability of a platform because the AI now understands through the human observational methods that, oh, you have to adjust for weight when you’re picking up objects. And it’s going to change based on the weight of the object that human’s picking. AI can inversely view that and then it can create sort of baselines of neural networks to actually start providing sort of that on edge solution to make the product better.
And that’s where the fallacy of the human falls to the fallacy of the AI. In the end, you’re modeling an imperfect thing, so it’s never going to be perfect. So we’ll see where that goes. But where AI is right now, my view is sort of it’s over-hyped and it’s not sustainable. When we can get the technology to that point where there’s a high level of confidence, I’m not three months later resetting the model because people are prompt hacking a RAG method of AI and things like that, then we’re going to be in a better place. But still, the human needs to be present because environment is not sterile like it is in an AI research lab.
So once we perfect those sort of things, we’ll be maybe a little bit better down the road, but there’s also a lot of things that are now growing around the sustainability of the practice where in the investment community, maybe they’re going to want to start asking the right question.
What is RAG?
Yeah, I have a couple of things there. First of all, if you could, for our listeners, I think a lot of people don’t know what RAG means.
MARK
Yeah. So I’m going to try to really simplify it. I’m not even going to give you the textbook definition of RAG.
JANE
Okay.
MARK
Basically, think of a sort of data structure model that allows you to inject subject material. So in our case, maybe it’s a process of how to do something within our organization, or it’s what we call a trade secret, right? And we can talk about security because that’s the next sort of part of this, right? But the ability to sort of create an array of documentation and then allow the AI to extrapolate tokens or concepts off of those assets and then create a bias or a sort of reasoning behind that material so that then it can be re-queryable. That’s sort of what RAG is.
So think of it as: I create a training in virtual reality or augmented reality, and we have procedures for that training. But what we do is we take those procedures and then we can add things like AI understanding of objects that relate to the procedure. So now we can have the AI infer or make assumptions about spatial, contextual objects to assist in the training. Think of it as throwing all these documents into a basket, and then the AI is very quickly able to make, I'd say, reasonable assumptions on that information, so that when it's queried against, it can give a certain level of confidence to the right answer versus the wrong answer. And that creates that feeling of, oh, this has some sort of intelligence.
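The "documents in a basket" picture corresponds to the retrieval step of a RAG pipeline: index a small corpus, score it against the query, and inject the best matches into the prompt before the model answers. A minimal sketch, assuming nothing about any specific vendor stack; the toy bag-of-words scoring stands in for real embedding models, and the procedure snippets are invented:

```python
import math
from collections import Counter

def vectorize(text):
    """Toy bag-of-words vector; production RAG uses learned embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    """The 'basket': score every document against the query, keep the top k."""
    qv = vectorize(query)
    return sorted(documents, key=lambda d: cosine(qv, vectorize(d)), reverse=True)[:k]

def build_prompt(query, documents):
    """Retrieved context plus the question; an LLM would answer grounded in it."""
    context = retrieve(query, documents)
    return "Context:\n" + "\n".join(f"- {d}" for d in context) + f"\nQuestion: {query}"

# Hypothetical procedure snippets standing in for internal documentation.
docs = [
    "Procedure A: scan the package label before loading the vehicle",
    "Procedure B: heavy packages go on the bottom shelf of the cart",
    "Holiday schedule: the facility closes early on national holidays",
]
print(build_prompt("how do I load heavy packages", docs))
```

The "re-queryable" quality Mark mentions falls out of this shape: the corpus can change without retraining anything, because the model only ever sees what retrieval hands it.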
JANE
You mentioned training and the last time we talked, you gave a lot of examples of what kind of stuff you guys are doing in training. It’s really, really interesting. People really liked it. We did a little mini video that we put in a short that was very popular. How are you dealing with training now? Have you integrated AI into it?
Integrating AI into training, like Socrates
MARK
We actually had a very successful pilot program for our different silos for training, where we implemented a low-code solution and AI for refactoring or updating some of the training practices we've done in the past that were solely VR. So we'll go to the example of how to pick something up or how to load something in our operation. We've created simulations now for training that move away from the very ba-bom-bom-bom procedural way of doing the thing: do step A, do step B, do step C. Traditionally we had events tagged to those steps, and that was sort of the determinant of your success for learning. The interesting thing with AI is when you combine something like RAG with something like procedural content generation, what you're able to do is have a more Socratic training approach for the lesson.
So for example, we may have a loader that will not be prompted to, “Okay, you’re going to do this.” Instead, they’re going to ask the AI and say, “Okay, well, I need to load a particular vehicle. How do I load that?” And the AI would be like, “Hey, this is how you load it.” It’ll give them sort of a very persona response to it. And then at that point it might suggest and say, “Hey, we have simulations available for you to reinforce or if you want, learn particular steps of what I just mentioned, are you interested?” And then at that point, we see a higher level of engagement by the learner to want to learn that information. So that’s sort of one area where we saw really through that sort of fast procedural creation model, just like you’re seeing where you’re taking a 2D image and making it 3D, we’re sort of doing the same thing but to training.
You’re getting a lot of cost savings, right? We’re able to map things like 80%, 90% cost savings because we don’t have to hire a studio to do it. Instead, we can have the AI do it. And it’s doing it sort of, I would describe it as a focused, because you were talking about AI is good when it comes to very focused things, where it can very quickly create sort of phases to train someone on something. And I think when we had our discussion last, I talked about reps.
JANE
Yes.
MARK
AI is able to very quickly change things slightly based on variables it observes in the user, so that they can become more proficient in a specific area or phase of what they're learning. And that's where we've really seen a lot of interesting results, and we're advancing that in the coming year.
Closing the door to sharing information
That sort of leads to the next question I had, which was something related to our earlier chat where you said that companies are beginning to close the door to sharing information about AI because of corporate competition. And I asked you if that was still the case today, and you said it was a good question because in the US, you cannot patent or secure your IP due to legal decisions in the past. I wasn’t sure what you meant by that.
MARK
So I’ll explain that a little bit more. One of the reasons why large organizations are closing the door, they’re not disclosing to the public their findings or sort of some of their projects. Case of why I’m always asking, “Can I talk about this?” Is that the framework itself of AI can become very challenging in the US to patent or to protect particular methods that you use because you’re using AI. And the reason for that is the Supreme Courts of the US basically made specific decisions in the past about, if you have AI create something, is it trademarkable? Is it copyrightable? And there’s been a lot of debate in the US about this. And for some of the things even we’ve done, it’s a case where initially I had hopes to share it, but because of those rulings, it’s a case where we sort of treat some of the things we’re doing as sort of a trade secret because we want to reap the benefit of being to market first over our competitors.
And that’s really where that closed door sort of happens is because the technology stack associates things like AI. And if it’s not sort of that generic use case that I was talking about before, the chatbot that tells you where and how to buy a product, everybody’s doing that. But if there’s something more niche, more specific, lawyers question if AI really is influencing the capacity of that IP and to protect yourself in those situations, it’s best to keep quiet. Hence, now the closing of the door. We don’t want to share. We don’t want to bundle it in a way that it becomes a product that’s resellable or franchisable to a partner or to the public, which goes back to sort of the secrecy, the guardrails, the aspects of AI loves to talk, loves to tell you about it, loves to do all these great things. But part of that wonder, the magic right now is the fact that when you’re looking at some very niche sector solution and implementing AI with it could sometimes be a detriment because it could create problems to your brand.
JANE
But using AI is perfectly legal, of course.
MARK
Yes.
JANE
But what is the problem? Why does a lawyer not want to hear about using AI in a particular product that you want to …
MARK
Well, it’s not that the lawyers don’t want to hear about the AI being used in a particular product, but what they want to do is they want to mitigate risk of AI to the product. And in the past, things like competitive advantage could be protected using patents. And if you submit a patent to the patent office in the US and it has AI and they perceive the solution to somehow be relevant to case law that says otherwise, well, that the particular business method or process that you’re using is not patentable because of the association of AI, right? Because AI uses “public knowledge.” And public knowledge is considered common knowledge, hence it’s not patentable.
JANE
Right. That’s very interesting. I don’t know what the case is in Europe. I don’t know if there’s a case across all of Europe or if it’s country by country. Do you know?
MARK
I’m not that strong. I might have a topic we’ll talk about next year that I can’t currently talk about, but I think certain aspects of AI within Europe is better guardrailed from the standpoint of regulation, but I kind of feel that AI is sort of open market territory. And I think even in Europe, if you ask around, I think you’re going to find at the corporate level, there’s a lot of door closing where previously as part of product value, they’d be willing to share. Now it’s a case of, “Well, we really don’t want to talk about it because either A, some of the methods that we have are not protected and can easily be replicated by a competitor.”
JANE
Yeah. Yeah, that makes sense, doesn’t it? Given the legal context that they are in.
MARK
And the level of investment that is required to truly implement this type of technology in enterprise currently.
JANE
I remember when I’ve started doing research on this article I’m writing right now, one of the things I noticed was so many of the case studies that I discovered that sounded fantastic when I started reading them word by word, there were lots of, “This can be what we are doing. This may be the result of …” There’s so many cans and maze in the articles that they’re interesting ideas, but I was not able to use them as an example in my article because they weren’t certain enough. Now, maybe that’s partly for protection. I don’t know, but I found that really interesting. And then of course we have all the big consultants who are promoting AI as much as they can because if they can show how much good it does, then they’ll get more business. It’s a money thing for them.
AI and the greed play
JANE
Yeah. I would be surprised to know if large companies use these large consulting companies very much in their AI stuff, or do they tend to hire someone, maybe hire an employee, grab an employee from an AI consultant, from a big consultancy, and then put them in their own company, which I think would be the smartest thing to do. If I were a CEO, that’s what I would consider.
MARK
I think it’s an interesting question. I think maybe your answer’s going to be certain consulting firms would advise one way versus other consulting firms advising another way. How is that going to play into the corporate leadership? Because in my opinion, corporate leadership generally tends to be risk averse, right? They’re more interested in protecting their brand, but they need to maintain competitiveness. So what’s the best approach to that? Some organizations may go one way versus the old outsource because the perception is outsourced innovation is quicker and insourced, but you’ll find difference of opinions based on that assumption.
JANE
Anything else that you’d like to talk about now regarding AI? Any point that we didn’t cover that you think would be relevant?
Is AI sustainable the way we are developing it?
MARK
Well, I mean, I think the big thing that we really didn't talk much about is the sustainability of the practice, right?
JANE
Ah, good, good point.
MARK
Doing a lot with emerging technology like immersive tech, VR and AR, right? These are sort of what I call hype technology platforms, right? And typically these technologies fall out of fashion more so because the hype is bigger than the ability to either execute or create the product that people expect based on the hype, or the practice is not sustainable in the environment. And I think we're not asking enough questions about the sustainability of AI in the practice. Through fellow colleagues as well as the professional networks, I'm finding more and more professionals are asking that question, which is leading to sort of the shakes of, "Oh, is AI a bubble? Is the bubble going to pop?" Because when you look at certain fiscal trends and things of that nature, you have to raise the question: how is this sustainable?
Normally technology needs to be sustainable and accessible for it to succeed long-term. And AI, in its current form, is questionable on both sides. How accessible is the tech as a whole, and how sustainable is the technology to maintain?
JANE
By sustainable, are you talking about use of water, environmental sustainability or …
MARK
I mean, you could say that, right? In some scenarios, it's a case of trying to develop a data center in a geolocation that doesn't make sense. But sustainability also in regard to the rate at which the technology changes, right?
So within the space of virtual reality and augmented reality, we went from seven-year cycles down to six-month cycles. And the technology got to a point where, at the six-month cycle, the new methods and technology stopped getting adopted, because the pace was just at a rate where practitioners or integrators couldn't integrate the technology fast enough.
So if you think about AI, these models that these organizations implement, they change in three, six months, and the big names require you to update your infrastructure to support those models. So now you have a very cyclical but rapid system change. From the enterprise standpoint, that’s a challenge because that’s going to eat your resources, be it human or fiscal. And that’s just to maintain these processes and workflows that are relying on AI. So now being a practitioner who’s done things like immersive tech, I start questioning how sustainable is that practice?
If you are hiring the latest and greatest programmer, because it's a very unique niche position with a high price tag since they're in demand, how sustainable is that practice if the mantra is, "I only need them for six months"? And then six months later, the whole concept of the framework has changed. So what do you do? That's a consultant's dream, right? That's how I always say it: that environment is a consultant's dream. But from the standpoint of business, there's the challenge of sustainability. Tech debt and code management are huge things that I think larger organizations more and more are starting to look at, because we're past the quote, "building a POC to show value that it works."
Now it’s a case where a lot of these AI workflows are being instituted in a production environment and they need to be maintained and sustained. And from our practice on the XR side, there’s challenges because I was talking about even just with RAG models, even if it’s a secure instance version of that model, what I mean by that is it doesn’t talk back to the core frameworks of AI to a public domain. It’s all secure. We still find that what humans inject into that instance can daydream, can get biased, can change, and then we’re resetting, updating, and moving forward.
Long term, is that truly going to be maintainable? We may be getting, quote, "lower headcount for repetitive tasks," but if the value we're perceiving doesn't earn out within a couple of months, is it sustainable? Is it truly a sustainable practice currently? Those are the questions that I would give to your viewers right now, things that you want to think about when you look at these sort of very rapidly changing, highly hyped technology platforms.
JANE
Well, if I have any viewers who want to know more about them, I’ll tell them to get in touch with you.
MARK
Sure. I’m on LinkedIn. I think everybody has interesting viewpoints on technology. There’s a lot of large amplifying horns on the greatest things of AI. Not too many horns on, well, from a practical standpoint, because you know how I am about practicality of technology.
JANE
Yes.
MARK
I don’t think there’s a lot of people talking about the practicality. Always open to those discussions.
What will the big issues be in 5 to 10 years?
Okay. Mark, how do you see your future, professionally speaking, evolving? Say over the next, let’s take either five years or 10 years, whichever timeframe you feel like talking about, or maybe I should say six months. No, I’m joking.
MARK
The six months is a bit short. I’ve been asked the 10-year question a lot. We actually did a fun event with the Boys and Girls Club of America, and one of the students actually asked me that, and the answer I gave them, I’ll give you. And that is I see in 10 years wearable technology potentially being much like a laptop of today. So what that wearable form is, it’s tough to say, right? There’s fashion.
If the geopolitical environment continues sort of the way we've been seeing it in the last year, I question that. I think, more pessimistically, that we're going to be pretty much similar to where we are today. But my hope, from an optimistic standpoint, is that if we have wearable devices, AI glasses, AR glasses, [inaudible 00:26:18] glasses, whatever you want to call it, in 10 years, I could see what traditionally would be an executive secretary role being an AI assistant very easily. I could see a world where subject matter experts have tools that will allow them to more dynamically maintain and update their practices in one form or another.
I think, or I hope, that robotics won't be such a human buzzkill. I think realistically, technology can always influence people's perceptions of your products and brands. And I think there's going to be an interesting balance in that space. But for what I do, I see what I'd describe as sort of an AI assistant that maybe would be like a junior-dev-capable agent. That's 10 years out. Five years, maybe, maybe not. We'll see how things progress.
But really, I still see the human, my role, I still see my job being prevalent because it’s about innovating, it’s about creating solutions. And from what I’ve seen from sort of the AI vibe coding platforms and things like that that are out there now, I think realistically, it’s not going to progress that much. Things might be easier. You might not have to know how to code a specific language to build something, but you’re still going to need individuals that know how to securely and safely integrate the technology, because I do see in 10 years systems in a way being more connected, but also they need to be more secure.
That’s kind of where I’m seeing the world is the fashion of which we get our data, I think is going to become more casual or more wearable. But the challenges of that information being secured, that’s where I see still challenges in 10 years because in the end, we need to, like I would talk about with training, with any sort of technology, we need to be confident and we need to trust those platforms, those solutions. And I think that’s where in 10 years it’s going to be interesting, a sort of hybrid, not so optimistic that humans won’t be needed, but a case where I would say not even a majority, half of the repetitive things we do, we’re going to use tools to do it, and that’s probably going to be AI.
JANE
I had a really interesting conversation with a guy that you ought to meet sometime. He’s a professor of robotics at the University of Michigan, and boy, he gave me a whole different feeling for what robotics was about and the direction it was going. And he’s sure got a lot of views too. I think it’s a topic that caught a lot of people’s attention. He covered a lot of different angles. I think people interested in AI and robotics would probably like to talk to him. And because he’s a professor, he’s very comfortable answering questions, debating, having disagreements and so on like a professor does with students. And so it’s quite different than when you talk to a CEO or a director of whatever inside an organization.
So trust is the main thing that you’re talking about that we need in the future. We need it now and we need it even more in the future. And another question I had, but maybe we won’t go into it in too much detail is what are the ethical dilemmas that we have today? Do you have any particular thoughts on that? Because of AI, have our ethical dilemmas changed?
How can we build trust in the digital age?
MARK
Yes and no. The way we frame ethics, and the way I would say the common individual views ethics, has changed, because ethics also encompasses things like empathy, the ability and willingness to communicate with different people. I feel the technology has not improved that. It's similar to when we talk about virtual reality and multi-user experiences: one of the trends we were finding was, yeah, the idea was you were supposed to be in this great sort of metaverse of people and you were going to meet everybody. But what we found was there's a lot of isolation. Even if you look at things like the implementation of Teams in organizations, gone are the days where you'd meet face-to-face.
When you meet someone face-to-face, it opens doors to collaborate. You share empathy, you share stories. There's almost a physical as well as emotional connection to an individual. When we talk about technology like AI, there is none of that. So with the concept of a human not being needed, there is no empathy. It's just following a process. And the sort of global hyping of AI, I feel, gives the human an excuse not to be as empathetic, to not want to cross the aisle, talk to someone, have the debate. And unfortunately that's kind of the trend: even in corporations, you hear about AI selecting candidates for a position. And it's like, okay, well, now we're gamifying it. If we know a particular system, people, or actual websites that provide services, are able to game that system to increase the probability of selection.
That’s where for me, there’s a lot of concern, or potential, I call it brand backlash because of AI. Because again, we are human. There is a certain level of interaction that you and I have that I enjoy. And when you have processes, I use that word strongly, processes in place, processes remove that.
JANE
They remove trust, don’t they?
MARK
And they remove trust.
JANE
Trust often comes from physical interaction, in my experience.
MARK
Yep. And even if you think about it, right now there's a human that does something on a daily basis. And maybe once a week we have a meeting with our global teams. We grow trust with those individuals. You can't grow trust with AI by prompting it.
JANE
Yeah.
MARK
Right?
JANE
Yeah.
MARK
So to your point, that's where I see there's been a decline. People believing Sora videos that are AI-generated, and with that, creating distrust with certain people and certain types of individuals. That's the disconcerting aspect.
JANE
So what are your final words of wisdom?
Final words and wisdom: Get past the smoke and mirrors
Don’t fall to the hype, no matter what it is. I’m going to repeat what I said before. Be practical, get past the smoke and mirrors as quick as possible. People have to think, have to want to actively think about something more and more so with AI because AI is going to tell you what they want you to hear, not what you really care about or what you want to know or what you want to believe.
JANE
Yeah.
MARK
You can be manipulated. So be practical, question it because in the end, we made the comment of, on the corporate side, AI is about greed. It’s about being more profitable at the cost of the human trust, the human empathy. And is that a wise direction to go? Taking the time, thinking things out sometimes is a good thing, right?
JANE
Yeah.
MARK
Be practical. Move on, innovate, kick butt.
Kick butt. That’s really your final word. Thank you very much, Mark. It’s been very, very interesting. We’ve gotten into things I wasn’t expecting and that’s great.
MARK
And that’s why I love these calls. I love these discussions. I get to go outside of what I call my innovation silo. That’s the fun part, always talking to you, Jane.
JANE
Oh, wonderful. And the fun part for me of talking with you is that you are advanced from a technology viewpoint, very advanced, and yet you always have a human approach to things. And I think that’s really important.
SUBSCRIBE TO IMAGINIZE WORLD ON YOUTUBE
We talk with forward thinkers, sci-fi visionaries and pioneering organizations about people and society, AI and humans, the earth and survival. Read more at Imaginize.World
