
RealmIQ: SESSIONS
RealmIQ: SESSIONS is the podcast where we dive deep into the world of generative AI, cutting-edge news, and its impact on society and business culture. Listen in on conversations with leading AI experts from around the world. Our relationship with technology has undergone a captivating transformation. Machines have transcended the role of mere aides; they are now instrumental in fundamentally reshaping our cognitive processes. In this context, AI evolves beyond an intellectual collaborator; it becomes a catalyst for change. Hosted by Curt Doty, brand strategist, AI expert, and AI evangelist.
RealmIQ: SESSIONS with TIM EL-SHEIKH
In this season 3 episode of RealmIQ: SESSIONS, Curt talks with Tim El-Sheikh, founder of Nebuli and host of The CEO Retort podcast, who joins from London to unpack his extraordinary journey from biomedical science and competitive sports to launching one of the world’s first augmented intelligence studios. The conversation explores his early exposure to tech in Abu Dhabi, the collision of sports and science in his life, and the ethical dilemmas that led him to create a human-centered, VC-free AI company.
Together, Curt and Tim examine the broken state of AI ethics, the commodification of personal data, why regulation is necessary but insufficient, and how trust—not scale—will define the next wave of innovation. They close by imagining a utopian “solarpunk” future where ethics and imagination guide AI development, not greed or dystopian fantasies.
Topics Discussed
- Tim’s origin story: From martial arts and basketball to biomedical science and coding
- Founding an ad network for scientific journals pre-Google Ads
- Early ethical red flags around user data sharing in biotech
- The birth of Nebuli: ethics-first, VC-free AI studio
- AI in science, healthcare, education, and the environment
- Dangers of unregulated AI and irresponsible VC-driven tech
- Copyright, data scraping, and etiquette in AI training
- Politics, misinformation, and the Hollywood-ification of AI
- Why we won’t reach AGI in five years (and maybe never should)
- The post-hype AI future and the necessity of trust
- Utopian vs dystopian narratives: choosing imagination over fear
- How creators, not governments, may save AI’s future
Quotes
"There's no intelligence in AI. It doesn't know you're a child. It doesn't know the content is toxic."
— Tim El Sheikh
"I realized I could no longer be part of a system where investors said, ‘Let’s just sell Curt’s data.’"
— Tim El Sheikh
"We pride ourselves as one of the very few AI companies that have no VCs. That independence is an advantage."
— Tim El Sheikh
"Innovation isn’t stifled by regulation. What stifles innovation is lack of imagination and lack of funding."
— Tim El Sheikh
"Trust is going to be the most important differentiator in AI after the hype dies."
— Tim El Sheikh
"If we can terraform Mars, why not regenerate the Sahara? There's still oxygen there."
— Tim El Sheikh
"Hollywood has warped our understanding of AI. We don’t need humanoids. That’s not where the magic is."
— Curt Doty
"I’m an AI optimist—but I’m realistic too. We need to expose the good and the bad."
— Curt Doty
Sponsor our show - https://www.realmiq.com/sponsors
Receive our weekly newsletter: Subscribe on LinkedIn https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7024758748661391360
Sign up for one of our workshops: https://www.realmiq.com/workshops
Are you an AI Founder? Learn about our AIccelerator: https://www.realmiq.com/startups
LinkedIn: https://www.linkedin.com/in/curtdoty/
Need branding or marketing? Visit: https://curtdoty.co/
Welcome, everybody. Let's get into it. This is season three and I'm so excited to introduce today's guest, Tim El-Sheikh, coming all the way from London. He is the host of The CEO Retort and founder of Nebuli, one of the world's first augmented intelligence studios. His background is as a biomedical scientist.
Quite the background, sir. I'm so happy you're with us. How are you doing today, Tim?

I'm good. Thanks for having me. Great to be here.

Awesome. Well, you know, I've been following you for at least a year, if not two, on LinkedIn, and I appreciate your candid, pithy commentary on not only AI but politics and technology in general. And I think we're certainly kindred spirits. I appreciate your voice, how you approach things, and how you're pushing, certainly, ethics in AI. I know that's a passion of yours. So let's just talk about you, your background. If you were a pro athlete, how'd you get into technology and AI, and then put an ethics layer on top of that background?
Let's talk about that.

Yeah, I wish I could say that it was all part of a plan of some kind. It was purely accidental. I'm just a very, very lucky person. I think technology and sports somehow collided. The planets just accidentally collided, because I was one of those, I suppose, annoying children, very badly behaved at the time.
My father was one of the early immigrants to the United Arab Emirates, so we lived in Abu Dhabi when it was still desert, pretty much. They didn't have that many schools, and I was pretty much literally kicked out of every single school when I was a little kid. So basically, my parents had no choice but to take me to work. My father was an engineer, and his organization was part of the government at the time. They had these huge data centers, you know, those very, very old data centers that you see in Godzilla movies, like one of those. So that was my first exposure to tech, because he used to babysit me there. Like, he literally would take me and put me inside one of these things. And for me, that was heaven. I just loved tech ever since. I was about five. Similarly, they had to find an outlet for this negative energy that I had, in terms of, you know, bullying kids and so on.
So they sent me to a martial arts school, and as it happens, Abu Dhabi has always been the best location for martial arts. So basically I trained at a very, very high level. They pushed me to get into karate competitions. I competed for the first time when I was about eight or nine, something like that.
So my passion for sports emerged from that point, and the two kind of started to grow together. And so, yeah, I learned how to code when I was about 10. I loved gaming. And I think it was really in my teenage years that I started to look into more interesting things that I thought were fascinating. For example, I loved robots, as all teenagers did in the eighties and nineties. And I kind of lost my passion for martial arts. Even though I still train today, I didn't like competing. For me, martial arts is more a way of discovering yourself than trying to beat people up.
But I discovered basketball, so that was the true sporting passion I had. I actually am a basketball player, even though I'm not very tall; I'm barely six feet tall, so I'm a point guard. And again, these two journeys went in parallel. In the UK, I started my degree in biomedical science, which is where my biomedical science background began. I also started training with my university team, and we trained at the championship level. I was part of the first team that managed to drag our university out of the gutter. We won local regional championships, and we dragged the university into the national championships for the first time. So that was really cool. And the local team, the Leicester Riders, basically discovered me and one other player. That was the same time I was doing biomedical science, through which I discovered AI, and that was all happening in the late nineties. So we could delve deeper into that if you want, but this is how, basically, the two worlds have been part of my life, my entire life, literally.
Fascinating. So let's talk about the 21st century here. What is the path that led you to opening your own AI agency, and what is your mission?

Yeah. Well, again, it's a journey, I guess. I became an entrepreneur in the world of tech in probably 2002.
The first company I created was an ad network. That was pre-Google Ads, and it was the first ad network for science. We were looking at how to use an AI recommendation system, basically what we see in social media today, as a way of helping scientific journals digitize. Because at that point, people in science were still dependent on print media. They were still trying to, you know, sell ridiculously expensive ad pages inside journals and so on. And we thought, hey, there's an opportunity. The internet was a new thing, kind of like the way AI is these days, and we just wanted to help scientific publishers digitize and take advantage of online marketing.
But this is where we started to play around with AI, building AI recommendation systems. It was all fine, but I think the first time I realized the ethical side of things was a problem was when the publishers were quite openly sharing each other's user data. Say publisher A would say to publisher B, hey, I would like to beef up my user base by half a million; can I kind of borrow some of your users? In fact, it's a practice that is still happening today among social media companies. There was research I shared on my podcast, I think it was about Facebook, which found that every individual user's data is shared with around 2,300 companies.
And that includes some of the bigger companies like Amazon, Netflix, et cetera. So it's a practice that's been going on for a long time, and I wasn't comfortable with that. But then the final nail in the coffin, as it were, was that we had investors who quite openly were saying to us, hey, why don't we sell user data to data brokers? I thought, no, we can't do that. People literally entrusted us with their data. Especially in biotech, it's really sensitive, right? You don't just give data away. But the thing that really shocked me wasn't that they were unethical as individuals; it was that they were just pretty blasé about it. You know, it's like, hey, let's just sell Curt's data. Who cares? Let's just jump in the pool, basically. And I wasn't very happy with that. Then, in the next stage, Google Ads emerged and basically steamrolled all of the ad networks, including mine.
So it's one of those things, I guess. But that gave me the opportunity to start looking at other areas, for example, how I could use AI to solve wider societal problems. That was around 2010 to 2012, when we started to see the emergence of misinformation online and deepfakes. That was already a thing at the time. And I thought, okay, we actually know how to build recommendation systems. We know how to build language models, or micro language models. Maybe we could use similar techniques to try and identify misinformation online. We thought we'd start predominantly with science, because in that period the Prime Minister at the time in the UK, David Cameron, was saying that the biggest national security threat we faced in the UK wasn't terrorism but a pandemic. And as somebody who studied biomedical science, we actually studied epidemiology, and we knew that one of the biggest problems in pandemics is the anti-vaxxer movement and the spread of misinformation.
So I thought, hey, maybe we could build a tool where we could potentially help governments and agencies, et cetera, figure out what's actually being spread online, whether it's real or fake. And we actually pitched to Google, and we joined Google Campus and were part of the same cohort as DeepMind. So I was very lucky to have an amazing cohort around me. And again, that was the period where, as I always like to share on LinkedIn, we didn't really think about AGI. We didn't think about using AI to replace humans. Everybody there was trying to solve a problem, right?
I mean, I described my problem. DeepMind, for example, was obviously using AI to solve scientific problems. There were loads of other guys doing cybersecurity, et cetera. So everything was mission-driven. But we had a lot of debates in London for sure about the future direction of AI, and around 2014, I started to see this campaign, as it were, predominantly from social media companies, where people were trying to push AI to the web without any guardrails or human intervention. This idea that, you know, let's outsource some of that user recommendation, content recommendation, et cetera, to AI. And I was like, you can't do that. If you do that, it will be a problem. There's no intelligence in AI. This is part of the problem: people think there is intelligence. There is no intelligence. It doesn't care that you are a child, because it doesn't know that you're a child. It doesn't care that the video or content is toxic, because it doesn't know that it's toxic. But unfortunately, people didn't listen. And that was my second period, where I started to realize, again, this sector as a whole doesn't seem to care. And I felt, okay, something has to change.
That was the journey that led me towards Nebuli. And again, you know, I exited in 2022 and cut all my ties with investors, because investors are part of the problem. In the same way, they just say, hey, let's just build the algorithms. They want to monetize it, right?

Yeah. How do I get my money back? And selling data, that's the number one play, right?

Exactly, exactly. And it's a shame, because, as I said, people sometimes think that I attack investors, but I don't really attack them. I just don't like their attitude. And I do understand that they have to achieve their ROIs, whatever. I just don't think you should do that at the expense of people's safety. You should not do that at the expense of children's safety, because if it were their children, they would regret it. But I think the fact is that it doesn't impact their own, and I hope it never would. It's a horrible thing if your child commits suicide, for example. We've seen a lot of these examples on social media, and I wouldn't wish it on anybody. But because their children or their loved ones don't go through that, somehow it just doesn't cross their mind. There's no empathy, right? So Nebuli was basically the outcome of all of that, where myself and my co-founders thought, okay, we want to do two things.
Firstly, we wanted to educate the market about what AI is and what it's not. Most of the time, as I'm sure you're aware, a lot of people think AI does something that in reality it can't actually do. They think of the Hollywood version of AI. We're trying to, you know, somewhat burst the bubble a little bit. Say, well, actually, that's Hollywood; this is what happens in the real world. And that's fine. It doesn't mean that it's bad. It's still phenomenal, still powerful. The second thing we're looking at is how we make it ethical, meaning that we prioritize building AI models that are trained to effectively, quote unquote, understand empathy, ethics, and safety. So that, for example, if it detects that a user is being negative, it should stop. It should not interact with that user anymore, as an example. There are a lot of techniques we can play around with, and of course it means it's more expensive and takes longer to do. Hence, the VCs don't like it. So we thought, okay, we'll do it. I invested my own money into it. My co-founders did the same. It's employee-owned, and we pride ourselves on being one of the very few AI companies out there that have no VCs. We're completely independent. And that independence is an advantage, not just for us, but I think also for the users and our customers. So that's what Nebuli is all about, and that's how I got into the whole ethics debate.

Yeah, that's amazing. When you think about ethics, I think about regulation, and there have certainly been battles, discussions, some laws. There's a difference between what the EU is doing, what the UK is doing, and what the US is not doing.
And all that leads me to a conclusion: I don't believe we can leave it up to the governments to figure it out. It will be the people rising up and talking, even here in the States at the local legislature level, to get state laws going, one. And then two, what I call alt-tech companies like yours, which have the right theories, philosophy, and a mission that is ethically minded. Having companies like yours rise up and compete directly with big tech, I think that's what's going to save us. What are your feelings about the status of laws and governance in these countries, and how you fit into that?

No, absolutely. That's a great question. I think you're right.
It is a little bit messy. I mean, there isn't any consensus globally on what ethics even means. And I think the problem is that everything's been politicized, which is a very big problem. People look at ethics and DEI and they think, okay, it's all bad, it's some box-ticking exercise, et cetera. But I always try to simplify. I think the problem we've had in the tech sector for a very long time is that we've pretty much failed to communicate with the general public in a way that makes sense to them. We try to be way too technical, et cetera. Whereas I try to be as non-technical as I can. And for me, when it comes to ethics, it's simple. It goes back to the point I made earlier: how do we make sure that the technology is safe? That's it. And I don't think you need to be politically affiliated with any party to say, hey, how do we make sure that the kids are safe?
How do we make sure that AI doesn't give you results that could potentially destroy your company's reputation? That's all ethics. It's nothing extravagant. It's sort of common sense, right? Hence, in fact, the reason I decided to start my own podcast is to do exactly that. I try to tell people that we don't need to wait for regulations to do the right thing. I mean, you don't need a rule that says committing murder is bad. How about you just don't do it, right?

Yes. It's common sense.

Yeah, it's common sense. But regulations can be useful. And believe me, as an entrepreneur, I'm not a fan of regulations. I'm telling you, they can be quite ridiculous in many cases, especially in Europe and in the UK. But the one thing I do like about the European approach, and it's not just me being biased because I'm in Europe, is that Europe always prioritizes that idea of safety. What is safe for the public? Is it ethical? Hence, you know, they introduced GDPR. Again, it's not perfect, but at least they're doing something to make sure people don't do what my investors did in the past and be brazen about, hey, let's just share people's data. It at least creates a map on which we can follow the path towards ensuring that things are safe.
Right? But of course, I guess the debate is about how you enforce the rules, how you enforce the regulations. That's where the problem can get a little bit complicated. For example, I'm not particularly a big fan of the EU's approach of suggesting that larger models should be regulated more than smaller models. Because from my point of view, as somebody who has built AI models, you don't need large models to create havoc. Cambridge Analytica didn't use large language models or any large model to create their havoc. A really tiny amount of AI can destroy a lot of businesses in an instant. So things like that, the technical side of it, need to be revised, I presume. But my general message is that doing the right thing doesn't require regulations. Just follow common sense. And from the business perspective, because most of our customers, well, actually all of our customers, are business to business, we've started to engage with governments to advise them and so on.
You know, imagine if your company deploys an AI model and it does what happened to one of the delivery companies in the UK. Was it DPD? I can't remember which one. They deployed a chatbot, and the chatbot started dissing the company, saying, oh, this company's horrible, the staff here are ridiculous. And I was like, this is the kind of issue you need to avoid. So brand protection is a key element here.

True, true. It may have been true, but you don't want to own it.

It may have been true, yeah, absolutely. Because the other thing I like to tell people is that AI is data in, data out, right? So if you've got any anomalies in your data... In the same way that some elements of this world try to tell you that, hey, AI can amplify all the good stuff a hundred million times, it can also amplify the bad stuff. If your data is not on point, or you've got any dark spots in your data, it'll amplify that a hundred million times.

So, as a business... and likely there's more bad data than good data, in terms of hate and racism and all that, prevalent and going unchecked in social media, right?

Exactly, exactly. You know, that's the thing. It's like social media: no governance. And here we are, I don't know, 15 years into it.
And you know, you see the damage, yet those same leaders of social media are the same leaders in AI. And of course they don't want regulations, and damage will happen, because these big tech leaders just don't care.

No, I mean, that's the sad reality. They don't care. And I guess the question is why. Is it purely ideological? I see people go on about them just being evil or whatever, but I don't like to get into those sorts of things; I like to be practical about it. There's obviously a reason they don't want regulation, and that they try to destroy even current regulations. I get people on LinkedIn, as I'm sure you do as well, who want to push for deregulation on the grounds that regulation stifles innovation. Which to me is one of the dumbest things I've ever heard in my life, because nothing stifles innovation. If something stifles innovation, that means it's not innovation, is it? The whole point of innovation is that you overcome whatever challenges you see, right?

Well, it's creativity. It's creativity, which is human. There are no borders there, until you build a technology that creates certain factors that have to be considered, like, well, maybe we should regulate this because we didn't think about it when we had that spark of inspiration, right? That point of innovation where we want to go down this path. You've been down these paths, right? And you eventually switch gears, but it didn't stop you from ideating and innovating and creating, right?

I have to say, what stops innovation is two things: lack of money, if you don't invest in it, and lack of imagination. That's it.
That's it. Yes. Right. Yeah. Lack of imagination. So that's interesting. So, ethics applies to so many different verticals certainly in the IP and copyright arena. Music is a category. Do you have any thoughts about ethics in music and ai? Well, it, it's, it's at the crux of the, the whole copyright debate, isn't it?
I'm very upset with our government here in the UK, who seem to be considering weakening our copyright laws. And I always like to remind people that the UK, and again, I'm being biased, has the best creative industry in the world. If you look at some of the greatest games, movies, music, it all came from the UK. We never had copyright problems, right? So again, when people go on about copyright stifling innovation, well, how did we build the greatest creative industry in the world under the current copyright laws? Obviously, it didn't stifle anything. But it feels as if they're listening to the VC class, and I know, in fact, that one of the key advisors to Starmer, the Prime Minister, is a VC. To me, that's a very bad idea. You should not have a VC as an advisor. Again, no offense to VCs, but if you're looking at the technical side of things, you need to bring in actual entrepreneurs or developers or innovators in the world of tech, who will tell you, actually, you know, forget about copyright.
Copyright is not the problem. The problem is the data. The problem is how you compensate artists and musicians, for example, if you are to use their work to train your models. In fact, and I share this on LinkedIn a lot, back at Google Campus there was a sort of etiquette around training models. I can speak for my company at the time: because it was science, we would use proprietary or closed journals, for example, to train the model. The idea was that we wanted to train it privately, to see if it works, to test it, et cetera, but we would never put it out publicly unless we had some kind of agreement with the copyright owner. Again, it's the right thing to do. It's not about us being scared. If I'm taking someone's book, I'm not just going to take it, train my model with it, and then go out with it and ignore the author. There was an etiquette. You just don't do that. So the fact that I'm now suddenly seeing some of these same people who were at Google Campus going on about, oh, it's available on the open web... I'm like, yeah, that doesn't mean anything. It's on the open web because the owner allows you to access it on the open web. That doesn't mean you have the right to replicate it. And I think there's a bit of misinformation going on here, in the way they try to misinform people.
I've been to quite a few events where I had singers literally asking me, although I'm not a lawyer: Tim, I would love to put my music out there, but I'm scared. They call them ghouls: I don't want these ghouls to take my music and train their AI models with my voice and my music. What do I do? My response is, hey, go get yourself a really good IP lawyer. I've got a couple, actually; both of them came on my podcast to explain all of these things. But yeah, that was part of the etiquette we had over the years. So I get pretty annoyed when I see some of my counterparts basically decide, you know what, let's just forget the etiquette and prioritize this AGI dream, which is literally complete and utter nonsense. But hey, it gets the VC cash, right? So let's forget the ethics and copyright. Let's take people's work and try to convince governments that copyright is actually a problem, even though it never has been, as I just explained. So yeah, it's a pretty aggressive data-grab campaign right now. They try to misinform the public, and we've seen that with DOGE as well. Not to be political, but I just call out what I see. They are accessing data that no company should access. No company should access your social security data or your healthcare data. All of that has been protected for a reason, and now these companies are trying to break all of these norms, in the name of making AGI.

Yeah, a lofty goal. Etiquette went out the window; it's a quaint idea now. And there is misinformation, certainly in politics, but even in the tech community.
Yep.

Where people are confused by a lot of the terminology that's thrown out, I mean, bandied about. And so that leaves a confused public, creates fear, and creates inaction, when people should be using their voices and choices to rise up against what I call the tyranny of big tech to protect their data, let alone be in support of what is really empowering technology that can do amazing things in science and medicine and, you know, marketing, the low-hanging fruit. But at the end of the day, the misinformation, politicization, and geopolitics of AI are just creating this cloud of confusion. And you, like myself, believe that conversations like the one we're having right now can hopefully illuminate and educate, to help people get over the hump. You know, I don't think I'm speaking to big tech through my podcast. I'm speaking to people, the users, and, you know, the bubble of AI adoption, those enthusiasts who have learned to really love the technology. But there are so many choices out there, and I believe they will choose ethical platforms rather than just automatically going to the big ones. And I think those are rising. There are more startups happening that have the same lens as you, looking through the eye of ethics, creating platforms that people can trust. Right? Trust is a factor in adoption, and I think a lot of the misinformation has built distrust.
Yeah, absolutely. In fact, I am doubling down on the statement I've been making for years: I think the future success of AI, post-hype, will be trust. I don't think people will care about the size of your language model or how many parameters it has. No one cares about that. The one thing I started doing, and I encourage you, and the listeners, to do as well, is to try and get out of the tech bubble. Because of course London is a bubble in the UK, and then the Silicon Roundabout is the bubble within that bubble. So what I've been doing, which is one of the reasons I started the podcast, is that before I start any new venture, I always like to go out there, ask people questions, and explore. I've started doing this more regularly, going to and speaking at events outside London. I go to smaller, local events in various cities and towns, universities; I even go speak at schools sometimes. And I'm telling you, man, they're not engaged. It's not the same. So we've got the enthusiasts in our bubble; you go out of the bubble, and people don't really care. Or some of those who do care are, like you said, quite confused. They don't know what to believe. And I feel, you know, the onus is on us, like you said, to actually try and educate people.
So I don't tell people not to use ChatGPT. In fact, I was at an event recently, and there were a couple of people there who listen to the podcast, which was cool, and they asked, "So, Tim, are you telling us we should not use AI?" And I said, no, no, no. I'm not saying that at all.
I'm saying you should use AI. In fact, I've been saying this since 2019; I'm sure there's a tweet. I was at an event just post-launch of Nebuli, January 2019, a couple of weeks after we launched. It was at a university, in fact. And I said at the time that there's a big AI wave coming and you absolutely need to be ready for it, and you absolutely need to take advantage of it. Because if you don't, things in AI evolve fast, and before you realize it, your company will disappear. It'll become irrelevant in the market if your competitors deploy AI in the right way and you don't. You're out. Forget it. Right? And I still say that today. But what I also tell people is that because of this hype, it's quite dangerous for your company to follow this misinformation; it could literally destroy your company's reputation, as I explained earlier.
Right. However, ChatGPT is great for a non-technical person to try and understand what AI looks like. Because not that long ago, one of the things I used to do at any event was ask the audience: when I say the term AI, what's the first thing that comes to your mind? Almost 99% of the time they would say Terminator.
It was always the case. Right? And it was always dystopia. Sorry, T2 dystopia. Yeah, absolutely, absolutely. But this is how they related to AI: it's all about robots in Hollywood. Right. So that was the point where I would explain to them that robotics and AI are actually two very separate things.
You don't need a physical being or vehicle to deliver AI. AI can be injected into anything and everything, and in fact it's been part of your life for the past 20 years. I know, because I was one of the people who brought it. I'm sorry, but that's what we did. Right. So today, when I say AI, people think ChatGPT.
And I actually think it's good that people have been exposed to ChatGPT, because now they finally know that AI has nothing to do with robots. You don't need a Terminator anymore. But this is where I get annoyed with the likes of Elon Musk and Sam Altman and my counterparts. I used to try to teach people to take Hollywood out of their definition of AI, but now the tech bros are trying to bring the Hollywood definition into AI, just as people are starting to understand what AI actually looks like. This is the problem, and hence people are a bit confused. So I try to tell people: look, if you've been following me for a long time, you know I've been saying exactly the same thing nonstop.
Pretty consistently, the same thing applies. Just ignore what the tech bros are telling you. They're trying to give you that Hollywood fantasy, but the reality is much more practical. It's really powerful, it's useful, but it's also dangerous if you use it irresponsibly. Start with ChatGPT; you can do it for free. Claude as well, fantastic. Play around with it, understand how it works, but do not share your data with them. Never share your data. But you can deploy it privately: if you use Google Cloud or Azure or anything like that, they've got these models available on your private server. If you need our help, we can help you; or if you've got IT departments, let them do it for you, et cetera. So for me, that's why I try not to be negative about AI. Because, generally speaking, putting the tech bros aside, it is a phenomenal human evolution, the way we put it together.
It's remarkable. It's state-of-the-art beauty. I would say that. Yeah. Well, that's good. I mean, I'm an AI optimist, but I'm realistic when it comes to exposing the good and the bad sides. Yes. And I think people need to know those things you speak of. And by the way, I have not seen a robot in the two years that I've been using these platforms, so I don't know why robots keep getting brought into the conversation.
Actually, I don't know if you saw the post, it was fairly controversial, but I said that humanoids are probably the most useless robots ever. They're nice stories, but the whole point of robotics is that you build a robot that can do things that humans can never do. Right? And my point is, why should they look like a human?
Exactly. Why is it always personified as a human? It could be another thing. And again, it takes people with imagination, right? Yeah. To think outside the box. And it's kind of happening in robotics, where a robotic dog could patrol a border with a machine gun on its back better than a human, because it's lower profile and can roll and do all kinds of things. Whereas with a human-shaped robot, it's like, oh, there's the robot, I'm going to shoot it. So again, that's robot strategy, which is really separate from AI, like you say. It's what I like to describe as the Hollywood-ification.
Of AI again, isn't it? So they've tried to bring Hollywood back into it. Yeah, that was Black Mirror, right, with the dog. And I think a lot of Black Mirror is filmed in England, which is ironic. Anyway. Exactly. Yeah. I agree with you that England is a center of creativity.
Hello? The Beatles, right? Mm-hmm. The Stones. Yeah, just think about it musically. Exactly. It's amazing. You know, another aspect of AI is where we're going. Where will we be in ten years, if we can get past the politics that are exacerbating negative issues?
But yet science does march on, even though it's being repressed in the United States. Where do you see us in ten years? I think if you'd asked me this question a year ago, I would've had a very different answer. A year ago, I would've said that we'd have had a major scientific endeavor with AI.
I would've said we'd have more companies like DeepMind emerging, more effort invested in AI that can help us understand science, healthcare, the environment. Because the way I see it, and I talk a lot about this on LinkedIn and the podcast, from my experience, AI is part of what I see as the five key pillars of society: our identity, the environment, education, healthcare, and finance.
And, you know, AI has been part of these for years, and these five things could improve beautifully and exponentially with AI. However, I never thought politics would become part of this equation. Right. And the thing that worries me is that politics has an impact on those five things as well. Politicians everywhere, all over the world, use these five key areas as a political football. Yeah. And now that you've mixed AI into that, unfortunately, I'm more cynical. I'm not as positive. I'm really, really cynical about it. For me, I feel the only way for us to achieve any kind of positivity is for this hype to explode, which means it'll cost us a lot of money.
And interestingly, I believe Trump is going to facilitate it further. Now we've got these tariffs; I think it's going to happen. I think we're going to experience a major financial calamity, probably worse than what we had in 2008. And I experienced that one; it killed my company at the time, between Google and the financial crash.
And I feel there needs to be a major reset of some sort, politically and economically, because it seems that politicians have basically decided to jump headfirst into the AI hype, not AI reality, with all the deregulation, et cetera. I think we're going to go through a really messy period within the next five years, and the only way we can get ourselves out of it is to have a financial catastrophe. Because that's how we seem to learn lessons as a global society, don't we? We have to wait for something bad to happen before we realize, aha, we should probably have done that differently. You know? And I think this is why I feel that trust will be the key issue.
Human-centric models will be the key issue. I think people will want humanity back. I think that will be the thing. Yeah. So there'll be a major societal self-reflection: okay, we tried this AI hype, we went through all of that, we've lost a ridiculous amount of money, our economies are in disaster. What do we do now? Maybe we need to go back to the drawing board. Maybe we need to put people first again. Right. So that's my prediction, I feel. Sorry, from your lips to God's ears. Well, I mean, I even created a company where we're basically betting on that.
That's where I feel we can play a big role. But of course, ideally, with what I'm trying to do right now, I don't want that to happen. The main reason I'm doing what I'm doing, and I tell people I would've been a hell of a lot richer otherwise, is that I want to prevent the catastrophe. Right. Yeah. Interesting. Mm-hmm. But I honestly can't see it. If the catastrophe is inevitable, it could be the reset that's required to course-correct. Yeah, no, absolutely. I mean, what I can say with pretty good certainty is that we're not going to have AGI in five years.
Because AGI is, again, just a concept. It's as useless as humanoids. Why do you need AGI? I just don't understand it as a scientific endeavor. Kind of like with humanoids: yes, it's interesting to play with and to test, whatever, but if we're talking about real-world applications, it adds no value.
I mean, what do you need AGI for? What do you need robots for? Well, yeah, to send to Mars. I believe in sending robots to Mars, not people. Actually, I don't even recommend going to Mars; we have enough problems down here on Earth. Yeah. Well, again, that's another nonsense for me, this time as a scientist as well as a tech entrepreneur. They're going on about, you know, we want this terraforming of Mars.
I'm like, well, if we can do that, why not revive the Saharan Desert, for example? How about that? Yeah. Right. Yeah. It's sure to be a lot cheaper, and there's still oxygen there. You know, it was quite the lush, tropical paradise between the Tigris and Euphrates back in the day. Exactly. And it turned into a desert. Exactly. It's prophetic, right? Mm-hmm. But how do you reverse it? Right? And that's where I love conversations around solarpunk and a utopian future. How do you use technology and green energy, which has been politicized, to actually save the world and make a better world?
Yeah. And let's imagine that. Let's put Hollywood's emphasis on that instead of dystopia. There have been more books written about dystopia than utopia, and that should be reversed, right? It's just a matter of getting rid of the titillation of evil and destruction and putting it towards reconstruction, goodness, and humanity.
And unless we do that, as creatives and as people, we are facing a dystopian future. Yeah. That collapse is imminent in whatever respect, and to whatever degree, nation by nation, if not globally. But again, it's the people who need to rise and use their voices and choices: to choose what to read, choose whom to support and what to develop, and use that out-of-the-box imagination to imagine a better world through the use of AI and whatever technology.
Yeah. We need to get politics out of it somehow. I mean, I think this is really the key. I agree. But you know, we're going through some things here in America that are just really... We've been through it. We had Brexit. It's the same people. I mean, I even talked about this on my podcast. I don't want to plug my podcast too much, but just to highlight that many of us have been saying this for a long time.
I even compared it; I think it was in August 2024, when I was talking about what would happen if Project 2025 happened. This is literally what we had here with Brexit and Liz Truss, because it's the same people. The same think tanks, and whoever funds these think tanks, are exactly the same people behind Project 2025.
And I did say that if that happens in the US, it's basically Brexit 2.0, but American style, which means it's going to be bigger and harder. Because, I mean, they ransacked the UK. What I find really annoying, and it really does make me angry sometimes, is when I see the same people say, oh, look at the state of the UK, look how run-down the UK is.
And I'm like, you guys ransacked the UK. It was you. The UK was fine. Okay, not fine; no country is perfect, but pre-Brexit, I mean... Yeah. That's another thing I try to remind people of: London, we were the Silicon Valley of Europe, right? Yeah. And in fact, at one point I was campaigning and pushing for it; I wanted London to be the Silicon Valley of the world.
Because we had the perfect geographic location; everything was happening here. But then Brexit came in and just destroyed all of that. That's why I get really annoyed with the current government when they say, oh, we want to make London a tech hub. I'm like, we were the tech hub. No, we were the tech hub, you know?
And it saddens me to see the same thing happening in America. It's the same people. They ransacked our country, and now they want to ransack your country, and it's the same people. And I think there's an evil component there, though. You don't like to talk about evil, but I think there's an evil there that really can't be... oh, greed.
Maybe it's just greed, greed, greed. Yeah. Greed. But greed is evil. So greed is evil. True. Yeah. Anyway, well, I want to end on an up note, and I want you to promote your company and tell us how to find you. Tell our viewers how to find you, and where are you speaking next?
And how can they follow you? No, I appreciate that. So, well, if the listeners enjoyed this sort of conversation, I have that on my podcast, called The CEO Retort. And the idea behind it is that I try to bring experts in their fields from all over the world to talk about the issues I've talked about here.
So, for example, I mentioned the child safety issue; I brought a child safety expert from Canada, Professor Sarah Grimes. She's a world-renowned expert in that, so we had a conversation about it. We talked about copyright: I had this great guy, Ben Mailing, who is the IP lawyer in London.
He's Europe's leading IP lawyer, with a PhD in maths, so the guy knows AI and IP law. And I was like, perfect, I need you on the podcast. I always try to bring these sorts of conversations. No BS allowed, straight to the point; we don't pull any punches. So you can check that out at ceoretort.com.
And if you're interested, my company is Nebuli, that's N-E-B-U-L-I dot com, to see what we're up to. I mean, we do a lot of R&D work, so we're not your typical agency. We don't design apps or whatever; we're a group of data scientists. What we tend to do is help companies with data science and data modeling, sometimes even building AI models specifically for enterprises.
And we work alongside their CIOs and IT departments, because these are two very different things: IT and data science are separate. So that's what my company does. In terms of speaking, I'm literally everywhere. Like I said, I try to avoid the big events; I've done all the big events in London and whatever.
I feel like, nah, I need to speak to people outside the bubble, so I go to universities, smaller local business events, et cetera. So yeah, if people want me to come and talk to their local businesses, get in touch, I'll be happy to help. Okay. Awesome. All right, well, listen, thanks, Tim, and thanks to all of you for tuning in. Catch more of our RealmIQ: SESSIONS on your favorite podcast platforms.
Please follow and smash that subscribe button. You can also follow us on TikTok, LinkedIn, and Bluesky. Thanks a lot, Tim. Have a great day and we'll catch up. Yeah, thanks, Curt. Thank you, listeners.