RealmIQ: SESSIONS

RealmIQ: SESSIONS with Michael Garfield

Curt Doty Season 2 Episode 7


Welcome to Season 2 of RealmIQ: Sessions, where we talk with Michael Garfield: artist, musician, AI enthusiast, philosopher, connector, and fellow podcaster.

Main Topics Discussed

  1. Michael Garfield's Background and Work:
    • Garfield's experience with the Santa Fe Institute and complex systems science.
    • His involvement with Google Glass as a beta tester and its impact on his worldview.
    • Current projects including books, AI non-profit work, and smart glasses.
  2. Impact of AI on Society:
    • The implications of AI and wearable tech on personal and social interactions.
    • Historical context of technological evolution and its cyclical nature.
    • Challenges of information overload and trust in science and technology.
  3. Future of AI and Society:
    • The role of AI in amplifying human decision-making rather than replacing it.
    • The potential of new job roles like Chief Philosophical Officer.
    • The importance of creative and philosophical thinking in navigating AI's future.
  4. Regulation and Imagination:
    • The need for better regulation across different levels of society.
    • The role of imagination and creativity in shaping a positive future with AI.
    • Comparison of AI regulation to the regulation of social media and cryptocurrency.
  5. Educational and Ethical Considerations:
    • The importance of education and evangelism in responsible AI use.
    • Upcoming courses on embodied ethics in the age of AI.
    • The balance between capacity enhancement and thoughtful decision-making.

Quotes from the Speakers

  1. Michael Garfield:
    • "I just got off five years of doing science communication for the Santa Fe Institute, thinking about the general principles underlying the dynamics of adaptive and evolving systems."
    • "Wearing a computer on your face changes the way that you think, and it changes the way that you interact with the world around you."
    • "We are collectively struggling as a species right now to adapt to an enormous spike in the production of new information made available to us through our various successes in science and technology."
    • "We are going to get more and more comfortable with fundamental uncertainty and find better ways to weave processes that we don't understand like human or machine intuition."
  2. Curt Doty:
    • "The promise of Web3 was a DAO, a decentralized autonomous organization, which is really, you can be your own country with your own economy and your own relationships in this new world."
    • "The real question is: is all that efficiency driving us? And it's a completely different mindset when you're using the tools not to be efficient, but to actually improve learning."
    • "We have to focus on imagination. Imagine scenarios, imagine worlds, imagine new economies and how you leverage this technology to solve the problems of today and create a new world tomorrow."

Receive our weekly newsletter: https://dotyc.substack.com/?r=i9c4x&u...

Sign up for one of our workshops: https://www.realmiq.com/workshops

Are you an AI Founder? Learn about our AIccelerator: https://www.realmiq.com/startups

LinkedIn: https://www.linkedin.com/in/curtdoty/

And, here's the link to my calendar to schedule a personalized AI Training session:

One on One AI Coaching with Curt Doty

Hi, I'm Curt Doty with RealmIQ. This is our podcast, RealmIQ: Sessions, where we talk about everything AI with international AI leaders, and in today's case, right in our own backyard. So please give us a follow or subscribe. Today's guest is Michael Garfield: artist, musician, AI enthusiast, philosopher, connector, and fellow podcaster.

 

And he is local in my hometown, Santa Fe, which is refreshing, to see someone of his caliber here. Michael, excited to have you. You have such a great worldview and philosophy around AI, its disruption and opportunity. Why don't you share with us where you're at with AI, and what have been your challenges?

 

Sure. Well, I just got off five years of doing science communication for the Santa Fe Institute, the leader in complex systems science, so thinking about the general principles underlying the dynamics of adaptive and evolving systems, right? And that was a crash course for me after years and years of thinking about human-technology coevolution, coming out of an academic interest in major evolutionary transitions, like the origins of multicellular life, and the way that studying those transitions can inform the way that we think about and strategize our increasingly symbiotic relationship with various tech layers.

 

And back in 2013, I was a beta tester for Google Glass, so I got to do a lot of really interesting, weird, world-first performance stuff with that. That definitely informed one thing I hope we talk about today, which is the way that wearing a computer on your face changes the way that you think, and it changes the way that you interact with the world around you, not just the people, but the rest of everything.

 

And so now I'm working on a couple of books, I'm contracting for a new AI nonprofit, and I did some background research on innovation and the history and future of computing for Mozilla last fall. And yeah, I'm just, I'm in it. I'm actually about to press the button on pre-ordering a pair of prescription smart glasses.

 

My first pair of prescription lenses will have a GPT built into them. So that's fantastic; I want to hear a review about that. We'll have you back on the podcast to talk about that experience. You have such an interesting perspective spanning science and philosophy and art and design. And now layer AI into that.

 

Where are we headed as a society? You talk about some of the implications of wearing AR glasses, and what that means for personal, interpersonal relationships, or worldview. What do you see happening in this convergence? Well, let me give you kind of a weird scenario here first. I think most people listening to this are probably straddling the smartphone era; most, I imagine, are old enough to remember what it was like to remember your friends' phone numbers.

 

Yeah, right. And now we don't have to. When I was wearing Glass in 2013, it occurred to me, they actually put a kibosh on facial recognition stuff, because that particular device came out right at the same time as the Edward Snowden revelations about data surveillance. But there were people actively developing facial recognition software for people on the autism spectrum, so that they could understand other people's subtle facial emotional cues.

 

And I thought that was one of the more interesting use cases for something like wearing a camera around. And now everybody is used to seeing facial recognition at the grocery checkout. We live in a different world. And all of this is a snapshot of the larger thing that's going on, which is that we are collectively struggling as a species right now to adapt to an enormous spike in the production of new information, made available to us through our various successes in science and technology.

 

One of the more interesting talks that I've ever seen was given by Carnegie Mellon professor Simon DeDeo a few years ago, where he looked at the history of the Proceedings of the Royal Society, the oldest scientific journal in the world. He wrote an algorithm to analyze the relationships, as evinced in the papers cataloged in the journal, between different fields of knowledge. And what he found was that about every 150 years, we complete a cycle where we have some profound breakthrough, like electromagnetism, and fields that were previously considered separate become joined by a new consilient model. So what we thought were two separate things, electricity and magnetism,

 

Suddenly it's one thing, and that unlocks a whole new set of opportunities in science and technology. And then we are fat on the spoils of those successes for a while, until we get to a point where we've proliferated a whole bunch of new fields, be they academic or technological, and then we need a new framework to unify those.

 

And so, within about 150 years, we have about a hundred years of upslope, where we're just riding high on this dream of unification, and then we get to a point, like the point we've been in for the last couple of decades, and it's really, really pronounced now, where it's not clear at all to us how we are going to make sense of all of this.

 

It's not clear how we're going to bring everything together into a single unified understanding. And these kinds of things can be traced all the way back to even before the origins of what we think of as human civilization. Arguably, the evolution of sentences and syntactic language is another one of these instances, where the ability to communicate with each other in single-word utterances proved to be such a profound innovation that we generated all of this new social complexity in our primate groups.

 

And then suddenly we have more things to talk about than we know how to talk about, more things to talk about than we have the memory available to remember a word for every relevant situation. And so suddenly you get a new kind of recombinant principle, the sentence, which allows us to remember fewer words but join them together, and come up with a kind of force multiplier.

 

And so the way I look at AI is kind of within this sort of thinking: it's about addressing information scaling. It's about us having gotten to a point now, like, I'm sure you've heard, I know you worked in blockchain, Web3 stuff, and the way that people in that space talk about that kind of fintech as the scaling of a trust layer, where it's like, we have names because we lived in groups that were small enough.

 

We could remember everybody, and we could track reciprocal relationships. And then suddenly we live in societies where we need double-entry accounting, and then more so and more so. And now we're at a point where, within a few years, we may be regularly interacting with people with whom we are engaged in existentially important trust relationships, but whose names we don't even bother to remember, because our cognition continues to spill out into these layers that we build around us to augment it.

 

So, yeah, that's the long and the short of it. Well articulated. The promise of Web3 was a DAO, right, a decentralized autonomous organization, which is really, you can be your own country, right? With your own economy and your own relationships in this new world. And so, yeah, mind-blowing stuff that hasn't yet really lived up to the promise.

 

And that's some of my frustration with Web3: no one knew really what it was, and so adoption wasn't there. However, with AI, adoption and utility were immediate, and the technology advances were exponential. Moore's law no longer applies. And you talk about information overload and the ability of the human brain to process what is going on, and how fast the machines are learning, machine learning, and it is mind-blowing. And I feel we are racing towards this,

 

I don't know, a cataclysm, some type of event that's in your 150-year cycle you talk about, which I think is fascinating. Where is this leading? Lots of societal challenges, the socioeconomic impact. Lost jobs, but yet, as always, technology enables new sectors, right, new technologies that create more jobs that we haven't even figured out yet.

 

So I'm an optimist on that side, but I think your scientific worldview is interesting to hear and learn about, because you can just ride this technology wave and this creative wave of gen AI and have fun doing it, but really, where is it leading us, both as a creative and a designer-artist?

 

We share a lot of creative interests. You're probably more on the science and philosophy side than myself, but I totally dig that, and I think that's really interesting, because I think we look to philosophy to help guide us as a society; ancient philosophers were dealing with cultural disruption in their time, and what are the words of wisdom that we can use to help guide us in this time?

 

There's an information overload. There's certainly mistrust of the media. And so how do we guide ourselves? So, your question is a question that I spent some time unpacking in an essay I wrote during COVID called "We Will Fight Diseases of Our Networks by Realizing We Are Networks."

 

And the point of that was to look at the way that COVID containment policy had struggled to meet the pace at which new scientific information was being produced by epidemiologists, and that we ended up with these really profound and dangerous trust failures between experts and non-experts of all different kinds, right?

 

Like, we're not just talking about climate experts versus non-experts, or disease experts versus non-experts. The scale of social interaction that we depend on to maintain a coherent industrial and social fabric is so big now that we really are, weirdly, back in the position of having to take on faith a lot of the claims upon which we base the decisions we make in our daily lives. There's just not enough time in the world for each of us to read everything we need to read in order to form an informed opinion about something. And so this question, about when it is important for each of us to be informed, and how we can delegate decision-making, points us into a kind of philosophical abyss.

 

We've been living with it since the beginning in some way or another, which is that we're made out of relationships; what you and I take to be our individual selves can equally well be understood as a nested collective of collectives, living within some sort of social superorganism that is itself embedded within an even larger kind of symbiotic relationship between humans and technologies and non-human organisms.

 

And so, a big part of what I think, if you go back to ancient Greece, you've got the Delphic Oracle, right? The Athenian decision-makers recognized the utility of seers, of people who think differently, people who are capable of lateral thinking and kind of intuitive insight, in their own steering of policy. And now something like that is going on with AI. In spite of the fact that most of the business attention on AI right now is an effort to get it to perform in a reliable and understandable way, these are systems that we are probably mistakenly giving the keys to very, very life-critical systems, rather than using to augment our own decision-making.

 

And so, a big part of what I talk about on Future Fossils, and in work elsewhere, is this question of how we can use AI, broadly speaking, computing, to amplify and support decision-making rather than to replace it. Like, we want to make this easy, we want to outboard decision-making, but the more that we do that, the less resilient our society becomes.

 

The more we kind of hang out with people like us, because it's easy to trust somebody like you, the more homogeneous society becomes, and the less capable it becomes of adapting to new kinds of challenges. And so this "where is this heading" question, I think, is twofold.

 

One is that we're going to start seeing new C-suite positions. My buddy Peter Limberg, who used to run a forum called The Stoa, was just offered a gig as a chief philosophical officer. And I think that installing something like a contemporary secular Delphic Oracle inside the decision-making apparatus of an organization is going to become more and more prominent.

 

But the other piece is that each of us will, like the way that Marshall McLuhan talked about digital technologies making institutions out of individuals, like the rock star, each of us will also start to fold AI-based, kind of intuitive activity into our own daily stuff.

 

It's already the case with newsfeed algorithms, like saying, okay, what should I be paying attention to? And so, the product that my buddy Van Betower, a Future Fossils listener, a fan of the show, built, unbeknownst to me, is this thing that I've been wanting to have for years: a language model built on the hundreds of episodes I've recorded of my own show, which you can query like a kind of Wikipedia. It will synthesize an answer to your question out of a breakdown of every conversation I've ever had on record, and then it will actually provide a paragraph-length summary with linked citations that take you back directly to the primary source material.
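[Editor's note: the cited-retrieval pattern described above, answering only from retrieved transcript segments and linking each claim back to its source, can be sketched roughly as below. This is a toy illustration with hypothetical episode IDs and a keyword-overlap stand-in for real embedding search, not the actual product being discussed.]

```python
# A minimal sketch of retrieval with citations: find the transcript
# segments most relevant to a query, then assemble an answer that
# links every supporting segment back to its episode and timestamp.
from dataclasses import dataclass

@dataclass
class Segment:
    episode: str    # hypothetical episode identifier
    timestamp: str  # where in the episode the segment starts
    text: str

# Toy corpus standing in for hundreds of transcribed episodes.
CORPUS = [
    Segment("FF-201", "12:04", "selfhood is distributed across our tools"),
    Segment("FF-188", "33:40", "smart glasses externalize working memory"),
    Segment("FF-150", "05:12", "trust scales through new accounting layers"),
]

def retrieve(query: str, corpus: list, k: int = 2) -> list:
    """Score segments by keyword overlap with the query (a crude
    stand-in for embedding similarity) and return the top-k."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda s: len(q & set(s.text.lower().split())),
                    reverse=True)
    return scored[:k]

def answer(query: str, corpus: list) -> str:
    """Build a cited answer: the body comes only from retrieved
    segments, and each one is cited so the reader can check it."""
    hits = retrieve(query, corpus)
    citations = "; ".join(f"[{s.episode} @ {s.timestamp}]" for s in hits)
    body = " ".join(s.text for s in hits)
    return f"{body} ({citations})"

print(answer("what is the future of selfhood", CORPUS))
```

The design point, as the conversation notes, is that the answer is only ever assembled from retrieved material, so every sentence traces back to a primary source rather than being generated unsupported.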

 

So this is more trustworthy than a purely generative response, right? It's not just hallucinating answers. Certainly better than Google. Right? You can actually go back; you can ask it, as I did, what is the future of selfhood in the age of computing, and it'll give you a few sentences and a dozen clips from the show that you can go back to and confirm, okay, the machine is interpreting this correctly. So what you get, like you're saying, is better than Google, because if you ask Gemini, Google's trying to hedge its bets with Gemini, they're like, this is experimental.

 

You should probably back everything up with a search. So, like, I think what we're going to see is something like, my buddy Kevin Womack, who's an engineer here in Santa Fe, and I love talking about Star Trek, and a lot of the stuff that we're seeing now is the fruition of this sort of Star Trek computer dream that was seeded in the 1960s, '70s, and '80s. And the Star Trek bridge computer gives you one answer.

 

Like that. And so we're operating on a kind of flawed idea of what will be the case, I think, increasingly. The punchline of all of this is that we're going to get more and more comfortable with fundamental uncertainty, and we're going to find better ways to weave in processes that we don't understand, like human or machine intuition, and also ways of representing uncertainty in the answers that we query, that we ask for.

 

And one of the ways of doing that would be, for instance: imagine how differently COVID would have gone if, when science journalists reported on some new epidemiology research, they had said the scientists themselves are only X percent sure about these results, rather than the way that we talk about science now, which is, this is the new truth.

 

And so, there's reporting, then there's the synthesis, and then public policy, right? And it's hard to base public policy on "we're just reporting what we're finding out right now, and we don't know the long-term implications, but here's where we are." But again, the exponentiality of the pandemic accelerated perhaps unnecessary policy, and everyone was winging it.

 

Well, if you can't do nothing, you have to do something. And where is the trust in this world of the unknown? I want to get back to the statement you made around trusting these responses, like the one answer from the Star Trek bridge computer. I love that; I'm a Trekkie too. So it can create a human laziness, right?

 

Where we lose our critical thinking by trusting too much. And to me, that's the wrong use of the technology. And there's the huge bandwagon that was leapt upon, where essentially AI software is helping us become more efficient, right? Do the menial tasks, and business loves that, because, oh, everyone's being more productive.

 

But the real question is: is all that efficiency driving us? And it's a completely different mindset when you're using the tools not to be efficient, but to actually improve learning, either machine or human, to learn and actually tackle what I call the larger problems of society and climate and science. And how do we get there?

 

Because the efficiency thing is a race to the bottom of, yeah, well, we've laid everyone off, we're now 199 percent more efficient, and that's good for business. Okay, well, there's a societal impact from all those layoffs. And you didn't get smarter in the process. You just saved more money.

 

And that's the wrong direction. So how can we, and I think it's creative people, I think it's scientists who are creative, and philosophers, who can see beyond this immediate application and hopefully guide us to a better world of what we're supposed to be doing with this technology.

 

Do you agree with all that? Yeah. I mean, again, to fold it back into some of my own biggest inspirations in this thinking: one of the books that inspired me to start the Future Fossils podcast was Nicholas Carr's The Glass Cage, which is a direct address of this puzzle of how we are using automation.

 

And I like thinking about it generally in that way, beyond whatever we're calling AI. Jaron Lanier has made the point that AI isn't really one technology; it's an entire ecosystem of related tools, each developing at different speeds and based on different assumptions.

 

But automation broadly, we can tell a story about technology that, kind of like I was saying earlier, is really about amplifying and automating human capability. And so Carr starts this book by talking about the way that autopilot has taken over commercial airline operation, and how most of the big commercial air disasters of the last few decades happened for precisely the reason that you just said: pilots had gotten lazy and were not responding in emergency situations where the autopilot made the wrong decision. The human was also making the wrong decision.

 

And so Carr lays out this whole thing. He talks to neuroscientists who state the concern, for instance, that people who rely on turn-by-turn map instructions are actually not exercising the parts of their brain, the hippocampal grid cells, that we use to orient ourselves in physical space.

 

You get in the car, you're on your phone the whole time, you come out somewhere else, and you have no idea how you got there. And some of the people he spoke to in the course of writing this book were worried that the first generation of people to grow up entirely within this technological regime are going to develop early-onset Alzheimer's, because our entire memory system is built on top of the grid cells, on how we orient ourselves in space. Which is why real memory experts are people that use memory palaces, where you're actually spatializing memory: in some sort of virtual environment, like a mind mansion, you place every memory that you want to keep.

 

And that is a kind of primitive, analog way to think about using automation for the right reason. And Carr says, if you want to look at something like video games, right, I am a huge fan of The Legend of Zelda games for the Nintendo Switch, and these are enormous open-world games where you're just sort of plunked into the middle of the action.

 

You end up unlocking a huge virtual map as you become more and more competent at navigating that space. And Carr says it may be that the future of human-empowering automation, rather than human-eroding automation, looks more like video games. It looks more like us unlocking new map regions and new sets of powers as we find our way, as we learn. If you think about where the rubber hits the road here, it's in things like, I'll send you a link to this in case you want to include it in the show notes:

 

There's this fantastic series of design-fiction videos by Keiichi Matsuda, who has been thinking about augmented reality in domestic and urban life for years and years, and has put out some really interesting, compelling short sci-fi videos where he shows people using an AI gaming layer to navigate the task list of their daily lives.

 

Or he shows people that are using autonomous AI agents to go through a day of desk work. The last thing I'll say about it for now is that one of my favorite science fiction authors, and someone who has really, really shaped the way that I think about augmented reality and automation, is Charles Stross. The protagonist of his novel Accelerando, from 2005, Manfred Macx, is basically, we would recognize this person walking the street

 

today, maybe in San Francisco. He's wearing smart glasses. The smart glasses have a kind of exocortical layer that allows him to filter and process and synthesize terabytes of news a day. And then he's able to work with this sort of machine cortex, which is running locally; it's his, it's not running on remote servers.

 

And it's adapted to him personally, in the way that Gordon Bell, another major pioneer in this space, has been thinking about for years with MyLifeBits, his personally tailored search system that supplements memory. And then this character in Accelerando, Manfred Macx, takes all of this information and is able to synthesize it into specific tech innovations, and then has his machine layer automate the patenting, because that's drudge work, right? That's not something you want to sit there having to do.

 

You don't need to write the patents. And the last piece is that he's able to use it to identify the people and the organizations that are most well positioned to benefit from these new patents, and then work out the licensing agreements with them. And he lives as a venture altruist.

 

He lives as someone who maintains this enormous patent portfolio but licenses it for free, to the people who are most likely to use it for positive-sum, pro-social applications. So he's making everyone wealthier. He's not really making any money, but he doesn't have to, because he's living in this whole new kind of mutualistic paradigm, where our technologically enhanced ability to innovate is generating so much affluence that we don't need this. I mean, and then the last piece I'll say about this is that there's the other side of it, which you've spoken to already on this call, which is, to the degree that we are obsessed with generating post-scarcity abundance, we're missing something crucial, something fundamental.

 

That's why I like affluence more than abundance. Because we already live in an abundance; there's too much for us. There's too much sugar, there are too many different news sources vying for our attention. And what we've realized in the course of trying to end scarcity is that there are fundamental limits, like the availability of human attention,

 

that we're not going to be able to design around. There are only so many atoms on this planet, right? Every little computation has some sort of cost, energetically and materially. And so, long before we reach the hard limits on the efficiencies

 

that we were able to generate in economic production, we are hitting walls on our ability to coordinate and our ability to understand the systems that we've created. And maybe the solution is not post-scarcity so much as a recognition of where the real physical, thermodynamic and ecological, bounds are on these systems, and then designing

 

systems that are not just endlessly abundant, but comfortable for everyone. How can we do a better job of distributing all of the new wealth that we create? Yeah, so I'll stop ranting there. Yeah, no, it's all interesting. I think that your friend who's licensing his IP and patents, that kind of altruism is part of the solution of a new economy. And I think that it takes creative people to create a vision for what the new possibilities are, and not profit-driven, greed-driven big tech, which has been the old story for the last 30 years. But how can we use this new tech for altruistic purposes,

 

to enrich the lives of everyone, and not go down the doomsday scenario of lost jobs and robots taking over, Skynet,

 

another sci-fi reference. I mean, can I just say, okay, so I just had a nice long conversation with Gary Marcus at South by Southwest. And Gary is a kind of well-informed, prominent Chicken Little in the space of AI; he was part of the congressional oversight hearings on all of this stuff.

 

I think he wrote the algorithm used by Uber, sold it, got rich, and has since spent a lot of time ringing the bell that we need better regulation of this stuff. And while I understand the basis for his concerns, because, as Dr. Rowe has said, it's still a long way before AI will replace most people's jobs, we're already at the point where AI is being sold well enough that they think it will replace your job. And so what we actually have are organizations, and various other systems, that are working worse than they did. And then, at precisely the moment where we ought to be bringing people in, we have weird, enormous vacancies in the tech sector, at precisely the time that it's going through this enormous spike in production.

 

I worry, and I've heard Doug Rushkoff say something similar, that worrying about Terminators, or other horrible outcomes, is precisely the wrong approach to take right now, because the nature of these tools is that they amplify whatever biases we're feeding into them. And so it's like a child: anyone listening with kids knows that if the emphasis is on the behavior that you don't want, then that's what the kid hears.

 

And that's what the kid trains on, and those are the behaviors you're more likely to have to deal with, rather than redirecting the attention of that child into the behavior that you actually want them to learn. And so, when it comes to this, I have a lot of respect for people like Gary, who are trying to raise awareness about this stuff and inform the public.

 

But ultimately, if everyone is trying to design around a particular outcome, then all of the attention and resources are going into that attractor. And what we want instead, it's like the shift from cyberpunk sci-fi to solarpunk sci-fi, right? We need to be paying much more attention to the outcomes that we can all agree are desirable.

 

Yeah. So, I was just going to add that I'm for some regulation; it obviously didn't happen with social media, it didn't happen with crypto, and we see the damaging results. But I think we can't use regulation as a crutch. I think we need to focus on imagination. Right. Yes, because it's going past these directed outcomes that are based on efficiencies and business profitability, to: let's imagine a world, right?

 

You talk about solarpunk, right? That's where we need to be. Imagine a world. Well, who takes us there? It's creative people. It's science fiction writers. They were, and are, our prophets of the modern day, much like the writers of the Bible were prophets in their own times.

 

So, we couldn't imagine those planes hitting the towers on 9/11, that someone would attack us that way. That was our own lack of imagination as a national security threat. We have to focus on imagination. You have to imagine scenarios, imagine worlds, imagine new economies, and figure out how you leverage this technology to solve the problems of today and create a new world tomorrow.

And I think that is what's going to save us. But who are those people? Are they in government? I don't think so. It's going to come from the academic world, the philosophy world, the science world: people with imagination who don't get bogged down in whatever the current narrative is, whether it's doomsday or whatever else is said about AI.

 

They look to the future and realize these are tools, and ask how they're going to help us get to a better society, a better world, and solve these problems. Three cheers to that. Yeah. I mean, my slogan since 2008 has been: imagination is our greatest natural resource. And I've since played with modifying it, to say that no matter how amazing your imagination is, you're not really going to be able to bring anything back without being able to pay attention.

So there is a limit. Imagination may be our greatest natural resource, but attention is the resource that limits our ability to explore and express the imagination. Which is why we don't necessarily want to pin all of our hopes on regulatory layers. By which I mean that regulation exists at all layers of society: you regulate your own behavior by thinking about what you're going to do before you do it. But even then, your body has reflexes for when you need to react faster than you can sit there thinking about it.

 

Right. So I think overall what we're going to find is that we have to develop a better balance between all of the different regulatory layers and the speeds at which each of them is best suited to respond. Congress responds at one speed, the corporate world responds at a different speed, community decision-making responds at a third speed, and we make individual choices at a fourth speed.

 

And so, how can we structure the decision-making apparatus of society in a way that we're not asking questions at the wrong level? So that we're not sitting there philosophizing about whether to catch the ball that's coming at our face.

 

Right. But we're also not allowing our impulses to make financial decisions for us. I've been shredded; the big learning in trying to trade financial assets is: don't drink 300 milligrams of caffeine first and then sit there with a trigger finger.

 

And so I think that everything you're saying is right on. I would just say: let's decompose where we expect to actually find what we're calling governance or regulation. There's a sense in which the market is something that goes on in your own brain: different ideas, different proposed behaviors, are competing for attention and blood sugar in your brain at any given time.

 

So something like market activity exists in individual decision-making, and something like top-down government regulation exists in the family. For me, one of the most interesting things about exploring all of this stuff, and one of the things I find interesting about you and the work that you and your group do, is in helping people not only embrace imagination. There's this great line from William Gibson, the co-father of cyberpunk, in his book Burning Chrome. He says: "The street finds its own uses for things."

And so, yeah, no inventor is going to be able to perfectly pre-specify every possible use case for any tool. So why are we basically blaming the inventor, or the company from which something comes, or the government that allowed it to happen by regulating it in a particular way? If we really want everyone's best-case scenario to coexist, we need to go further.

 

We need to not just be calling for regulation at the federal and state level; we need to be calling for better regulation at the level of our cities, our communities, our neighborhoods, our families, and within our own minds. There's that great bumper sticker: don't believe everything you think. What if we could all insert a regulatory layer where I don't just buy a new phone because it's new, but actually find the time to sit and reflect on the decision before I spend a bunch of money? Kevin Kelly, the co-founder of Wired magazine, talks about this in his book What Technology Wants. He says the Amish actually do a pretty good job of this: they've decoupled themselves enough from the demands of the larger market that they can get together as a group and reflect on whether or not they will benefit from adopting cell phones or whatever. They're not rejecting technology outright. They've just found a way to decelerate to the point where they're capable of reasoning through things. And so how are we going to do that as people?

 

The last thing I'll say about this is that I'm working now with Andrew Dunn, the former head of innovation for the Center for Humane Technology, and Joshua Schrei, the host of The Emerald podcast. We're about to launch a web course this spring on embodied ethics in the age of AI.

 

And this is one of the big questions: now that we have these extraordinary powers, now that each of us is a magician who can speak new worlds into being, it's clear the direction innovation should be taking is not necessarily toward increasing our capacity, but toward increasing our ability to think well, to decide that we want things that are going to have net positive benefits in the world.

 

And so, yeah, how can we intervene at the level of the people who are working in the tech companies, as well as the companies themselves and the systems they're embedded in? How can you work with your clients to make sure that they haven't just exported all of their decision-making power, authority, and sovereignty, so that we're not all on the back foot trying to catch up to this stuff and making terrible decisions in the process? Right?

And yeah, the way we do that is through education and evangelism. And we're now entering the last stages of our session here.

 

So I want to give you the opportunity to promote whatever podcasts you have, which I think are a couple, and also new initiatives, and tell people where they can find you and learn more about what you're doing in your many worlds. Yeah, thanks, Curt. Like I said, I have askfuturefossils.com, which is where you can interact with the AI synthesist for my show, and you can find the podcast and my Substack through that portal. I have a ton of other stuff up on Linktree, at linktr.ee/michaelgarfield, including a link to enrollment for this course that we're starting in April.

 

And then, yeah, you can get ahold of me. I'm pretty approachable, and I'm really interested in helping people think through these kinds of things and find really fruitful, generative questions. And I know that the folks listening to your show are people who have interesting problems.

 

And yeah, it would be great to have you on my show too at some point. I've practically broken my handle off proposing collaborations. So I would love to find more ways to work with you and your network of brilliant people, to help people learn how to use these tools well, to relax into a kind of curiosity and play around some of these issues rather than making all their decisions from fear.

 

Yeah, this is the stuff that matters. So yeah, askfuturefossils.com, I guess. Okay, awesome. Well, I like to play with purpose. That's one of my mottos, because it's not enough to play; I think you have to have a purpose in order to drive adoption. Otherwise, people are just painting purple or rainbow pigs flying in the clouds.

 

And it's like, for what reason? Why? You're just cluttering the LinkedIn feed with garbage. Anyway, thank you so much, Michael. And thanks to all of you for tuning in. Catch more of our RealmIQ: SESSIONS on your favorite podcast platforms, and please follow and smash that subscribe button.

 

It's very important. And Michael, thanks so much. You're a terrific guest, and we definitely want to have you back. Have a great day, and I hope to see more of you in our local town here in Santa Fe, New Mexico. Thanks, Curt. Yeah, let's grab a coffee. Okay.

 

Hey there, are you the next AI unicorn? If so, you have to visit RealmIQ.com to learn more about the AI accelerator program called SmartTrack. This is an eight-week program where we help founders by offering comprehensive support in strategy, product development, branding, team building, and securing investor access.

 

Our services span the entire spectrum, catering to both recent startups and growth-stage companies, with a special emphasis on those in later stages demonstrating proven revenue. Learn more at RealmIQ.com/startups.

 

And thank you to our sponsor, Ovationz, with a Z. Ovationz is the first online platform to simplify the talent-finding and booking process for virtual events. Go to ovationz.com, that is with a Z, and there you will find a variety of speakers, including myself. And if you're a booker, use the promo code CURT5, that's C-U-R-T and the number five, to get a 5 percent discount for booking any speaker, including myself.

 

If your company is interested in reaching an audience of AI professionals and decision-makers to promote your event or product, we have sponsorship opportunities on this podcast. Go to realmiq.com/sponsors.

 

RealmIQ: book your corporate AI workshop today. CurtDoty.co: branding, marketing, and product development.

People on this episode