RealmIQ: SESSIONS

RealmIQ: SESSIONS with Jennifer Stivers

Curt Doty Season 1 Episode 23


This episode of RealmIQ: SESSIONS features Jennifer Stivers, a professional deeply involved in ethical AI and AI literacy. The conversation traces her journey from high-tech marketing to her engagement with AI ethics, emphasizing the transformative impact of generative AI. Jennifer discusses key ethical concerns such as responsible data usage, the implications of synthetic media, corporate AI governance, and the importance of regulation. The discussion advocates for proactive AI education, governance structures within companies, and ethical practices to maintain trust and avoid brand risks.


Topics Covered

  1. Introduction to Jennifer Stivers’ Career and Background:
    • High-tech marketing during the .com boom.
    • Experience at Apple under Steve Jobs.
    • Entry into AI ethics and literacy.
  2. Generative AI and Ethical Challenges:
    • Evolution of AI from Siri to ChatGPT.
    • Generative AI's transformative role in creativity.
    • Responsible data usage in AI tools.
  3. AI in Corporate Settings:
    • Experiments with AI tools during Jennifer's tenure at Coursera.
    • Importance of ethical considerations in corporate AI deployment.
  4. Synthetic Media and Voice AI Ethics:
    • Use of synthetic voices and ethical dilemmas for creatives.
    • Ownership, consent, and copyright issues in voice AI.
  5. AI Governance and Regulation:
    • Necessity for centralized AI operations within organizations.
    • Advocacy for proactive AI ethics teams in companies.
  6. Regulation and Public Policy:
    • Need for regulations to balance innovation and ethics.
    • Examples from social media and crypto industries as lessons.
  7. Brand Implications of AI:
    • AI’s impact on brand trust and perception.
    • Importance of transparency and ethical AI use in marketing.


Pull Quotes

  1. "Think before you act. Learn more about AI, understand the pros and cons, and think about the implications before putting something public-facing."
    — Jennifer Stivers
  2. "Ownership of your voice is critical; some companies ensure it can only belong to one person and must be consented to."
    — Jennifer Stivers
  3. "Governance within a company isn’t just hiring a chief prompt officer. It’s about a centralized AI operation that interfaces with all departments."
    — Curt Doty
  4. "Regulation won’t eliminate innovation; it will smooth out the curve and help companies avoid legal limbo."
    — Curt Doty
  5. "Without AI ethics, companies risk turning negative brand experiences into long-term trust issues."
    — Curt Doty
  6. "Synthetic media offers opportunities but also requires clear nomenclature to avoid fear and promote adoption."
    — Curt Doty
  7. "Every company needs an AI operation to guide ethics, legal, and implementation to excel and stay competitive."

Support the show

Sponsor our show - https://www.realmiq.com/sponsors

Receive our weekly newsletter: Subscribe on LinkedIn https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7024758748661391360

Sign up for one of our workshops: https://www.realmiq.com/workshops

Are you an AI Founder? Learn about our AIccelerator: https://www.realmiq.com/startups

LinkedIn: https://www.linkedin.com/in/curtdoty/

Need branding or marketing? Visit: https://curtdoty.co/


Hi, I'm Curt Doty with RealmIQ. This is our podcast, RealmIQ Sessions, where we talk about everything AI with AI leaders from around the world. Please give us a follow or subscribe. Today's guest is Jennifer Stivers. She is into ethical AI and AI literacy, among many other things. Welcome, Jennifer. Tell us about yourself and your career and how you're passionate about AI. 

 

Thanks for having me. I've heard some of your other guests, and I feel honored to be in their company. So I appreciate it. No problem, happy to have you. My background is marketing, mostly high tech. I've always been really interested in emerging technologies, though.

 

I joined an internet startup during the dot-com boom and saw that go all the way through each stage until we went public. Later I joined Apple, where I was a senior marketing manager during the Steve Jobs era, so obviously I learned a lot during that time. Yes, yes. It was fantastic in terms of learning branding. And I got the chance to observe and be involved in what goes into creating and protecting, well, mostly protecting, such a well-known brand.

 

Yeah. So then how did you get involved with AI? Or was it just a year ago like the rest of us? Or were you involved prior to that? Because you're right in the hotbed of technology there in the Bay Area. I would say it goes in stages. As you know, AI has a number of different definitions. So, if you're talking about workflow changes and text-to-speech and text-to-video technologies, I was in advertising back in 2017.

 

So at an IAB conference I got involved in conversations centered around truth in advertising, because the big concern at that point in time was fraud, potential fraud. Refreshingly, nobody I met there had any interest in tricking anybody. Everyone was really more interested in how to protect the truth in media and safeguard it. If you did have access to a tool like text-to-video or text-to-speech, mostly the onus was on yourself. I didn't have any particular vested interest in it besides curiosity, but you would take

 

the workflow where you would normally have to hire a crew and do a reshoot, and instead be able to tweak some ums, some ahs, change a couple of words, and the person being videoed would have a say over whether it was used that way or not. Yeah, they would or wouldn't have a say; they still had to approve their new likeness, right?

 

With the edited video, manipulating their voices, right? Right. And again, that's not something I was directly doing. That was just the part of the conference that intrigued me the most. Yeah. Well, AI has been around for a long time. It was first coined 50 years ago and has had many iterations.

 

And in fact, Siri was an AI feature, but it really didn't intelligently answer you back like ChatGPT does now. It was more of an audio type of search thing, and then Alexa came three years after that. So all these were progressions of AI. I think the big thing that happened a year ago was Gen AI and the fact that it was generating creative, right?

 

Creative writing responses to your creative prompts, that was the game changer. And that came about a year ago, as we just celebrated the first birthday of ChatGPT. Which, you know, OpenAI had started in 2018, so they had been around for five years prior, developing all this.

 

So it wasn't like it just magically appeared. But what were you doing in December a year ago, when you discovered this new software? I feel very lucky about where I was, right place, right time. I was contracting at Coursera. And that's just a good place to be for somebody who

 

is surrounded by smart people and who's interested in learning. My contract there was specifically partner marketing, and anything I did or was involved in was purely human-generated content. Granted, the leadership there did recognize when ChatGPT entered in such a

 

public way back in December, and they made it clear that they saw it as transformative and allowed us to participate in experiments with different technologies. So, nothing to do with anything I put out there public-facing or that was part of my job. It was just a nice place to learn.

 

Yeah. So what was it, did they give you pilot projects to experiment with, or did you take a normal job or project and run it through an AI LLM to see what the results would be, just to play around with purpose? As I say, I'm not 100 percent sure what I can talk about externally.

 

But they just allowed us to experiment with different tools in a guided fashion. So again, there was nothing that I did that was part of my specific job or that was external-facing. And then how do you feel about that duality of, you know, work done by a human versus work that's enhanced by AI, corralled and orchestrated by a human?

 

That's really where the ethics comes in, right? That fascinates me a lot. Okay, I know you're into AI ethics, so how do you deal with that as a creative, as someone who realizes there are problems with ethics and AI? What are the discussions you're having, or what do you advocate? I advocate: think before you act.

 

It's here, right? It's going to impact everyone in some fashion or another, but I really encourage people to learn more about it, to understand the pros and cons, and to really think about the implications before they put something out there public-facing, like a homemade chatbot, for example.

 

Yeah, you can create your own GPT now, but in terms of working within an organization, I think what you're implying is that you should be cautious about what you enter into any LLM, because it ends up being public and you don't really know where this information is going.

 

So you probably shouldn't put your company's prospectus into the system and say, hey, write this better. Or your Q4 earnings: can you make a table out of this for me? Right. I mean, it's really about trying to be as generic as you can in your prompting. So there are no proper names,

 

no names of companies or brands, necessarily, in what you're entering. Is the advice you're giving to be on a pathway toward, at least within the confines of corporate structure, how to use it in an ethical way and not destroy the company's IP? Yes, definitely. That's a huge element of it.

 

Because once you put in proprietary data, you don't know exactly how that's going to come out. So that's one aspect of it. The other is, after my contract ended, I continued to take some formalized courses and just experiment with the tools on my own. So for some things that might not seem so obvious to people, I would think it through all the way in terms of what the implications are.

 

So prior to working with generative AI tools, like you, I've been involved in the creative process. It would normally involve hiring a graphic designer and going through a lot of vetting before something has a logo put on it and goes out to the world. Also, if any content was generated, it would be human-created and you could stand behind it that way.

 

Ethically. I also have run audio ads in the past, where I hire a voiceover artist and run those campaigns. So all of a sudden I'm in a world where you really don't need to do any of that, and I struggle with what that means. Yeah, right. I mean, you being empowered as a creative amplifies what were obstacles for you before, or not necessarily obstacles, but steps in the process that you had to outsource and pay for, or your client paid for them.

 

And it could be cost-prohibitive, or many other factors enter into how to execute on a creative campaign. And now here you are, specifically in voiceover. Let's talk about that, because I use AI voices for many things, you know, to voice my blog, and I put them in promo spots. When I'm listening to them, I'm looking for whether it sounds authentic enough that it's not discernible.

 

People wouldn't say, oh, that's so AI, right? But I'm surprised at how realistic it is. I mean, the samples are human, right? So it's just about the interpolation of your script back out into the world, and there are controls to emphasize words and put in pauses and work with it, right?

 

Just like you would work with a voice actor. For artists, I remember the famous line I used to use: give it a little smile. Right, what does that mean? It's like, oh, just uplift the interest. So prompting becomes the same thing if you're generating it through a prompt, but if you're just literally feeding in a script and manipulating the words, there are some surprising results.

 

I'm using it. I'm sorry, but I have friends that are voiceover artists, forgive me, Matt, whom I absolutely love working with, but in a crunch it's like, yeah, I'm going to do it. Is it ethical? I think it's ethical. I mean, it's not using a celebrity voice, right?

 

And using it in a wrong way. It's like, no, these are supposedly voice actors who were perhaps sampled, given a persona in a library, and have generated themselves as voice avatars. So I don't think it's an ethical issue. It's a dilemma for creatives: am I never hiring my friends, the voiceover artists, again?

 

You know, those are dilemmas, but I don't think it's an ethical dilemma. What do you think? I've been educated a bit more about it just because I went to a talk in San Francisco with panelists from leading AI voice technology and voice audio companies. That was a really interesting panel to listen to, because the ethics were discussed in terms of permissions, and there was a healthy discourse back and forth with musicians and voiceover folks in the audience asking questions.

 

I didn't know that voice doesn't have the same copyright protection as faces or likeness. So the battle that seems to be happening now is over the ability to own your own voice and to give consent for how it's used. Yeah. So there are different companies that approach it in different ways.

 

At the particular time of the talk, Descript had some specific rules that I liked: you had complete ownership of your own voice. You could gift it to somebody else to tweak for the workflow, but the ownership could only belong to one person, and that one person needed to be alive. So if you think about things you've seen, there are probably documentaries you've watched that are narrated by somebody who is no longer living.

 

So each provider has its own rules. That's where the ethics kind of come into play. On the one side, if you're a voiceover actor, you could think, okay, if I use a voice clone, I can have my voice translated into languages all over the world and reach a broader audience.

 

Or you could think, and I've heard from some who say this, that the connection of a human voice is how they connect to people, and they don't want their voice ever synthesized. Yeah. And that is the category of synthetic media. Yes. Is that a term that you've heard in these discussions?

 

Because I like the term. It gets away from avatars, gets away from artificial, and I think it more accurately describes what we're talking about. It's synthetic media, but it presents it in a positive light, because nomenclature is important, and nomenclature helps adoption and removes the scary element.

 

And if it's admittedly synthetic media, you can be up front with that. You're not trying to fool anybody; you can say, I use synthetic media. Now, how that's tied to an original voice, I know there are lots of issues around that. And certainly the Hollywood strike, the actors' strike, even though I think it was actually settled or signed today, reflects an ongoing concern about the use of likenesses and who owns what, who owns the capture. It brings up a lot of ethical issues around rights, celebrity rights, but even character actors don't want to be replaced, even though it gets into the whole special effects category. Because if you watch Lord of the Rings, the battle of Minas Tirith,

 

most of those soldiers in that great battle were CGI. So they didn't necessarily take away jobs, because they were never going to shoot that in a big field with a thousand actors. It was all digital. So Hollywood has been doing this for years.

 

It's using technology around CGI to help tell stories in a more cost-efficient way. It's not foreign to Hollywood. So I've just been surprised by the backlash, like this has never happened before. It's been going on for a long time. But it's when an actor loses control, and,

 

let's just say they did a full motion capture of an actor, you know, James Bond, and then didn't need him in the next movie as an actor because they could make a CGI James Bond movie, and that actor wasn't compensated for that. That's the concern. That hasn't happened, where there's a synthetic James Bond out there making movies, but yeah.

 

That's the threat: loss of income, loss of control. Because that original actor, whoever might be the new James Bond, would have to approve a script to be involved with the project, right? But if the script is done without any actor buy-in and they use synthetic actors, then they can do whatever they want.

 

And that's the thing. I think that's the biggest fear. I don't think it's just the use of AI, because that's basically the use of supercomputing to help filmmakers make movies and realize dreams and worlds they couldn't normally think of. So there are certainly ethical considerations around the use of likeness, but a lot of these, like the voices I use, they're not celebrity voices.

 

You know, I'm not using Morgan Freeman, and there's been a Morgan Freeman double out there for years that you could use, a real person, because you couldn't afford Morgan Freeman in a commercial. So I feel I'm in safe territory when I'm using these no-name synthetic voices to voice some things for me.

 

I don't think I'm hurting anyone, but I'm noting the larger concerns. So tell me more about your ethics philosophy and what you're doing to advocate for ethics. I'm mostly bringing up these kinds of conversations, these kinds of issues, because, you know, you come from the filmmaker Hollywood background, which I find fascinating.

 

I'd love to hear more about that. But I'm thinking about the day-to-day use, where a business might cut costs and move forward with synthetic voice, for example. Think about sales calls. Yeah. Customer service, right. So, I mean, there are current lawsuits going on right now about

 

wiretapping. If you think about a customer service call that you have with a human, you're kind of used to the fact that somewhere in the background, it's going to say your call is being recorded for training purposes. Well, what if it's a bot that calls you and you don't know that it's a bot?

 

And it's gathering personal information about you. There are some retail stores going through lawsuits right now on that, where it's considered wiretapping. Wow, that's an interesting category. Yeah, you're right. This whole idea of, you're being recorded, okay, but what are they doing with the recording?

 

With AI, you can learn a lot from a recording, which is data, right? So how do you protect your data? And wiretapping is a crime. Wow. Yeah, I had not considered that. So interesting. So yeah, there are evildoers; they will always figure out a way. And I guess, you know, companies, well, data is the new gold.

 

Data is the new oil, right? And so what's happening with your data? Whether they're capturing your image, your voice, your inflections, information about you captured in these service calls, just think about the stuff you talk about, right? Right. And then if you think about how some of these AIs were introduced outside of voice, okay, let's just talk about

 

AI in general. A lot of people's introduction came through Snapchat or some kind of social media app that their kids use. Maybe they weren't aware of it, they have nothing to do with ChatGPT or AI, but maybe their kid is interacting with an AI bot, or some emotive AI that is asking questions and dishing out personal advice out of context.

 

Right. And they're just capturing the video of you, and you're in front of a phone. Is it recording? Or I'm purposely recording myself. That's data, right? That's the TikTok controversy: what are they doing with all that video, all that voice, all that stuff?

 

Right. I mean, we don't know. Exactly. So that's where I get interested, but I don't want to sound pessimistic. I like to bring up the issues so that people think through them. But I think we're in a period of transition. So I guess, what do you think about what's happening with regulations or the attempt to regulate? There was a statement from the Biden administration that the tech companies had to prove that these are good

 

intentions. Any technology has good intentions versus bad intentions; they have to prove it. But that is a guideline. How is it enforced? How is it reviewed and approved? Is it going to be like the FDA having to approve a drug, where you have to go through testing to prove that, yeah, it killed lab animals, but it doesn't kill humans?

 

So what does this technology do harmfully, or not, to people? What's your point of view on regulations, and are they going to work? Any positive news there? I honestly don't know. I try to tune in to what's going on legislatively. When the executive order happened, I reached out to people in different countries to understand how it was being interpreted in different regions.

 

You know, I only have my own moral compass and my own guidance to go by, which is, again, first do no harm. Think before I do anything. And then I try to ask questions about what's happening legally. So I think the world that you and I are in is a very active community of learning things on a day-to-day basis.

 

Each week the technology leaps forward, and I think you and I are in this stay-tuned-daily mode. It's a constant churn, hard to keep up with. You do a product review, and one week later it's useless because someone outdid it, and then you're reviewing that product, and it's exhausting. And without

 

regulation, what's happening is tech wars. The acceleration is a spike in innovation, with these companies trying to beat each other with all these features. It isn't this slow progression of progress over decades; it's literally daily, weekly, a straight-up hockey stick, not even a hockey stick.

 

It's really a right angle straight up, and only regulation will start to smooth out that curve, which won't eliminate innovation. That's the pendulum: innovation versus regulation. What do you want? You can't have both. Well, you kind of have to have both. It does slow innovation, but it's important.

 

If you think about regulation and the lack of it, look where we are with social media, never regulated. And to this day, Facebook and Instagram are still involved in lawsuits with states saying they're supporting some bad things as it relates to the exploitation of children.

 

So that's social media, and regulation of crypto never happened. Look what happened, right? People lost fortunes. So I applaud the government's effort, both here and across the pond; they're trying to do something, even though it's hard to reach agreements. This never happened with social media, and it absolutely should have.

 

Yes. So here we are trying to do something, and I applaud the effort. Will it be effective? I don't know. But at least we're trying. And I think advocacy matters, with people like you and me voicing opinions and raising concerns, and you more so than me, you're the one asking the ethical questions.

 

I think it's really important for people like you to have a voice. When they list the 20 most influential people in AI on the cover of Time magazine, let's say, it's like, well, I don't know any of those people, but I know a lot of other people who are even more knowledgeable, advocates and evangelists in different ways, while those top people are just figureheads, or they're the industry leaders, or they're big tech.

 

I think we shouldn't necessarily be putting those leaders on a pedestal. The implications of AI are so pervasive, affecting every industry and every stratum of any company, that we need voices beyond the leaders of big tech being the biggest influencers in it, right?

 

Yeah, I think we're in this period of transition, and I agree with you that we need some governance, we need some legislation. I think we also individually have to hold ourselves accountable for trying to use people's data responsibly and for getting educated on how this stuff works beyond a few cheat sheets.

 

On the enterprise side, now that Google's come out with theirs, we've got the big players, right? We've got OpenAI, we've got Anthropic, we've got Google. Stuff is going to roll out and start showing up in more and more apps that we use on a day-to-day basis. So I guess what I would advocate for in the coming year

 

is that, I would hope, leadership teams take it seriously in terms of what their privacy policies are and what their responsible AI approach is. It's going to vary company to company, but I also think there's a need for people who are trained on the tools to work within the companies or to help train teams.

 

Just because you roll out a tool at a company doesn't mean that each team is individually trained. Maybe they use ChatGPT at home, the free version, and they don't know how to translate back and forth between the two worlds, what's work and what's home. Yeah, really good point.

 

You know, my philosophy on reorganization of corporate structure is really less of a reorg; it's an AI org. That's what I call it, because what they need to do, and some companies are doing this, is have an AI operation. It isn't just hiring a chief prompt officer, which is bullshit.

 

It's like, you're going to pay C-suite level for someone to teach people about prompts? No, an AI operation, a human-centered AI operation, needs to be implemented to interface with all the departments and provide ethical guidelines, upskilling, training, all those things that should be centralized within any company so that there's cohesion and a unified vision on what to do with this technology.

 

You can't just have marketing over here saying, oh yeah, we're using it for everything, voice and video. Oh, great. And the legal team saying, wait a minute, what are you doing over there in marketing? And then HR is doing profiling based on people's, you know, fake employment videos.

 

And they're saying, yeah, we love it. It's like, okay, but that's not real either. What are y'all doing? So there has to be some centralized governance within a company, let alone governance from a government. Right. But, you know, HR wasn't a department 100 years ago.

 

Well, now it is. So now, who's guiding AI? Who's the new HR for AI? So it's really a question of reimagining corporate structure, and I believe it's a centralized AI operation that advises, trains, and governs the activity using this technology for every department,

 

because it affects every department. And if you have that, you will excel as a company and beat out your competition, because it shows leadership and you're not going to be the ones out on a limb, in legal limbo, violating something down the road. So what do you think about that philosophy or approach to governance within companies?

 

I like it. That's assuming companies are going to want to do the right thing. I don't know your thoughts on some of the things that have come out. I've been a bit vocal in my article about the pros and cons of wearable technologies. I see those as having really complex privacy issues that have not yet been figured out.

 

They can't even figure it out with policemen wearing body cams: did you turn off your camera, your body cam? Oh yeah. So then there's no record of whatever, right? And then with you and me, it's like, oh yeah, I'm going to push this button here and record our little conversation.

 

You didn't know I was recording you. And it's like, wow, horrible. Yeah. So again, it comes down to education. I feel like the best thing we can do is educate the public. Some of this stuff is already on pre-order, you know, Rewind pendants that are just necklaces that record every interaction around you.

 

And glasses, it's already been an issue. Yeah, AR not being able to be realized because it's like, are you recording me with those glasses you're wearing? Right. Yeah, that's an odd world, yet we need to lean in. So I like your concept of governance within a company, of having an AI person think through all the ramifications of it.

 

Am I hearing you correctly that it's somebody? Yeah, I mean, it could be one person, it could be a group, but I think they need to be experts, and they need to have a background in ethics and HR and legal so that there's guidance. And I think that should also be happening within the companies developing the technology.

 

Right. They should have an ethics operation within the company because they're developing AI. It's redundant to call it an AI operation there, but it's about somehow centralizing ethics, legal, and the operational structure of how this stuff gets done. I come from the development world also, where you have agile methodology: okay, you're testing it, you're testing it.

 

At what point does legal get to look at it? And at what point does the ethics group need to look at it and challenge things? So there is an iterative process in development. How do you incorporate that iterative process throughout the whole life cycle of a business, whatever your business is? I've adopted a lot of agile practices into my branding methodology, and I love it because it's consensus building, right?

 

So how do you take agile into an AI operation? I think it's fascinating to think about, and I think that's the disruption that's happening, but there needs to be a vision for where you go with this. Otherwise you just have McKinsey consultants coming into a company and saying, you should do this.

 

And then they walk away, and the company's left with, okay, we just had this workshop, great, they did a bunch of demos on ChatGPT, but how do we make it work for our company? Right. It's not just educating people on how to use the tools; it's what are you doing with the tools. That's your concern, right?

 

It's like, think before you do, right? What are you doing? Yeah, there should be a business need behind anything, and the business need should drive whatever the product is, right? So if I think of traditional setups within companies, there's product development, product management, marketing, marketing communications, legal, all of these groups that have to work cross-functionally with each other.

 

And what good is marketing without sales? What good is sales without marketing? What good is any of it without engineering or product? You have to have something to sell, you have to have people selling it, and you have to have marketing that listens to the needs of the customer. So where AI plays into this is that it needs to live within all of those things.

 

It can't replace those things. You still need humans in the loop. Yeah, you shouldn't be firing people; you need to be training people. I truly believe that, and that's another reason why I formed RealmIQ, to really promote upskilling versus firing. You need to train your people, and part of that training is also reimagining the corporate structure to have a functional and ethical AI implementation in whatever department, or multiple departments, within a company.

 

So yeah, I think this is fascinating. And also, you know, I'm a branding guy. So as you talk about it, at any vulnerable point of those communications, whether it comes from marketing or an experience with sales, those can become negative brand experiences, and they reflect on the brand.

 

Right. People think branding is a logo, and it's like, no, it's people's perception: I don't trust this customer service because now they're using avatars, right, or synthetic media. That is a reflection of the brand. However, if they do it well and execute in a way that builds trust, and they're honest and open and authentic in presenting this as either an option or an alternative, or the way the world's going, and they have a conversation about it with their clients, then that's building trust.

 

And that is good for the brand. So all these things ladder up to the brand, and that's why it's really a C-suite, CEO-level type of decision on what they're going to implement within their company. You know, Coca-Cola using AI imagery in a commercial? Oh, great. That's fluff.

 

Right, it's just fluff, but the fact is it could be implemented in every aspect of their business. And that's less sexy, right, but it helps make them more profitable and more productive and keeps them current with the technology. I'm not singling out Coke; I'm just using them as an example.

 

But listen, we're over our time. I have really enjoyed this conversation, and I want to give you the last minute or so to plug how people can get in touch with you, projects that you're working on, or conferences that you might be speaking at in the future. Just plug away; I want to support you in that effort.

 

Thank you. For now, I'd say just connect with me on LinkedIn. Chime in to any of the conversations, check out past articles, and make your point of view known. If you have suggestions for other topics you'd like covered, you can send me a suggestion. My website is marketmind.us. Thanks, Jennifer, and thanks for tuning in, everyone. Catch more of our RealmIQ Sessions on your favorite podcast platforms like Spotify, Apple Podcasts, Amazon Music, iHeart Podcasts, Google Podcasts, and YouTube. Please follow and subscribe.

 

We're going to have you back, Jennifer. Okay, thanks so much again. Thank you. Thanks, everyone. You can now catch RealmIQ Sessions on your favorite podcast channels, including Spotify, YouTube, Amazon Music, Apple Podcasts, and iHeart Podcasts.

 

If your company is interested in reaching an audience of AI professionals and decision-makers to promote your event or product, we do have sponsorship opportunities. If you enjoy these discussions on AI, please push that subscribe button below. I'll see you in the next video.

 

RealmIQ. Book your corporate AI workshop today.

 

Subscribe to our Media Slam newsletter and learn more about the intersection of design, content, and technology. CurtDoty.co: branding, marketing, and product development.
