
RealmIQ: SESSIONS
RealmIQ: SESSIONS is the podcast where we dive deep into the world of generative AI, cutting-edge news, and its impact on society and business culture. Listen in on conversations with leading AI experts from around the world. Our relationship with technology has undergone a captivating transformation. Machines have transcended the role of mere aides; they are now instrumental in fundamentally reshaping our cognitive processes. In this context, AI evolves beyond an intellectual collaborator; it becomes a catalyst for change. Hosted by Curt Doty, brand strategist, AI expert, and AI evangelist.
RealmIQ: SESSIONS with ALEX SERDIUK
In this episode of RealmIQ: SESSIONS, Curt interviews Alex Serdiuk, co-founder and CEO of Respeecher, an Emmy Award-winning AI voice cloning company based in Kyiv, Ukraine. Alex shares the journey of building Respeecher, its early breakthroughs, and how the company now works with top Hollywood studios, game developers, and the music industry. He discusses how synthetic voice technology is being ethically applied to dubbing, accent correction, and resurrecting historical or deceased voices—while also confronting controversy, fear, and ethical landmines in emerging media.
The conversation dives into voice cloning used for Luke Skywalker in The Mandalorian, accent correction in The Brutalist, and the resurrection of hip-hop artists' voices through Vocal Roots AI. Alex emphasizes the critical importance of ethics, permissions, and trust when using synthetic voice, and shares insights into the future of voice-first interfaces, synthetic media, and new real-time applications of Respeecher's technology.
Topics Discussed
- Origin and evolution of Respeecher
- Voice cloning breakthroughs at Grammarly Hackathon
- AI use in Hollywood films (The Mandalorian, Book of Boba Fett)
- Gaming industry’s demand for scalable voiceover solutions
- Use of AI for character voices, aging voices, and localization
- The Brutalist controversy and media misinterpretations
- Accent correction and AI’s role in post-production
- Ethics and permission-based voice synthesis
- Vocal Roots AI: Reanimating hip-hop legends with Frank Nitty
- Synthetic voice for multilingual music and interactive artist engagement
- Deepfake concerns vs. synthetic media
- Trust and transparency in AI creative collaborations
- New real-time text-to-speech capabilities
- Respeecher’s future in hospitality, holograms, and real-time AI agents
- Challenges of educating clients, investors, and creatives
- Concept of Creative-Centered AI and synthetic media as a legitimate category
Quotes
"The goal of high-quality synthetic voice technology is to be indistinguishable from real speech."
— Alex Serdiuk
"We are conservative with ethics. No one can use Respeecher without the voice owner's permission."
— Alex Serdiuk
"Trust is what makes top talent like Mark Hamill and Adrian Brody comfortable using our tech."
— Alex Serdiuk
"The technology is neutral. It's creatives who define its purpose and elevate its potential."
— Alex Serdiuk
"Creative-centered AI is what will elevate this technology—it’s not just tech, it's about imagination."
— Curt Doty
"Synthetic media is a new category that deserves distinction from deepfakes."
— Curt Doty
"We’re at the edge of a change where our devices aren’t just tools—they’re partners."
— Alex Serdiuk
Sponsor our show - https://www.realmiq.com/sponsors
Receive our weekly newsletter: Subscribe on LinkedIn https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7024758748661391360
Sign up for one of our workshops: https://www.realmiq.com/workshops
Are you an AI Founder? Learn about our AIccelerator: https://www.realmiq.com/startups
LinkedIn: https://www.linkedin.com/in/curtdoty/
Need branding or marketing? Visit: https://curtdoty.co/
Welcome everybody. Let's get into it. We are well into our season three, and I'm excited to introduce today's guest, Alex. He runs the Emmy Award-winning voice technology company Respeecher, an AI voice cloning company based in Kyiv, Ukraine. He's coming from Kyiv today. Slava Ukraini. He is using AI the right way, and I applaud his efforts.
Welcome, Alex. How are you doing? Thank you so much. It's a pleasure to be here. Yeah. So, I'd love to learn more about your background, how you got into AI and founded your company, and all the cool things that you're doing. Please elaborate. Yeah, for sure. I mean, it now feels that the whole background is this journey.
Because we've been doing this for seven years as a full-time venture, and for three or four years before that as just an idea. Before that, myself and one of my co-founders were working together in data analytics domains. So, we were analyzing data of banks and other financial institutions in order to improve their marketing, which is not something we were very excited to do for a big chunk of our lives.
So, we were looking into different things, and at some point we tried this technology at a hackathon organized by Grammarly, one of the Ukrainian unicorns here in Kyiv. Mm-hmm. And we won the hackathon with a very simple voice conversion technology. So, we basically made one voice sound like another.
Then we started to talk to the industry. We were excited about the film industry, music industry, animation, video games. And those folks provided us with feedback which was very, very concise: if you guys can make it work in terms of quality, we would use it, because everything we see out there is not usable for us for high-quality content.
It just doesn't cut it. So, we were focused on the quality of the sound first, trading off usability and scalability, and ended up getting into Hollywood projects starting from 2019. And since then, we've been delivering quite a few projects a year as an agency, but now we've transitioned to a platform for content makers and a marketplace for IP owners, for voice owners.
That's awesome. So, when the gaming industry contacted you and said what wasn't working for them, what was that specifically, and what problem did you solve? Yeah, I mean, with the gaming industry, first of all, it's really hard to draw a very clear distinction line between a feature film and a video game.
Because the process is somewhat similar. And also, characters travel from one to the other; there's a lot of crossover between those. Mm-hmm. But what's special about video games is they require a lot of voiceover, like a lot, meaning 40 hours, 80 hours of voiceover. It's massive. And in order to do so, they need to get those voices in front of a microphone for all this time, plus all the required overhead for recording.
And additionally to that, they need them to be available to re-record. Because when you create a feature film, you do principal shooting, and then you do ADR right after everything is done. But in a video game, it's like constant ADR. So, you need to change all that stuff throughout the process, which is a huge load.
So, what we do with the technology: we help those voices be available when a particular person is not available. But also, we do character voices. Because, I mean, for voice actors it could be really hard in terms of the character voice. They could be doing this squeaky voice for like 40 minutes, and then they have to rest all day long.
Mm-hmm. With the technology, they can just use their normal voice to drive their own character voice. We do de-aging, when voice actors have aged but need to sound exactly like they sounded a while ago. And we do some voice recreation, voices from the past, from the video games.
Yeah, well, I read your Mandalorian story with young Luke Skywalker. So that was cool. Tell me about that and how that worked, because that was kind of a top-secret mission for a while, and then it came out and it was really cool. Yeah, it was quite a project. I mean, it was in the early days of Respeecher. We were engaged with Lucasfilm and Skywalker Sound people, and the task was to recreate Luke's original voice from 45 years ago, from a while ago. And it was recorded on tape, when Star Wars was not as big as it is now, right? So, we had access to those tape recordings. We trained our models, which were way less sophisticated than those now. And it took us a while to make it, not just because of the technology limitations we faced back then, but also because of the way Lucasfilm operates. Those are very, very picky sound engineers, in a good way. I mean, they do thousands of takes, then they run those thousands of takes through.
Right. You know. Yeah. They run it through like 12 models we created for them, and then they select, and then we go through rounds of feedback and stuff. Mm-hmm. So, it took quite a bit of work. Actually, we did the same voice for The Book of Boba Fett using newer models. So, it sounds a little bit better to our ears in The Book of Boba Fett, but it was amazing to spin it off.
I mean, it was an extremely complicated project. And back then we thought that we were dealing with the oldest data we could ever deal with. But just this year we had to recreate voices from wax cylinders, the very first recording device, for a documentary called Endurance about Shackleton and his team.
Okay. Alright. Since we're talking about the movie world: you were recently in a controversy around The Brutalist, which to me was kind of a minor element, you know, under the special effects category. Films have always used special effects to finish a film, and yet
it bubbled up as a controversy, which I believe it was not. I've written about it; I've talked about it with friends on this podcast. So just tell me how that happened and how that went. And I'm part Hungarian, so I appreciate the fact that you were, you know, helping the Hungarian language kind of live in a new way through Adrien Brody.
Yeah, that was an amazing project, an amazing team we got to work with, and amazing voices we got to be working with. I don't support the word controversy, because, I mean, it was just created through the Oscars race. When people compete, they use whatever they can to compete. And since it became visible that voice synthesis technology was used to do accents, but there was not enough context from the original interview of the editor, many wrong stories appeared.
Yeah. Essentially, what we did: we engaged with the team, and Brady, the director, wanted to make it perfect, just a very legit request from, yeah, a filmmaker. He wanted to make it perfect in terms of the original sounding of Adrien Brody and Felicity Jones in Hungarian, and it was quite a bit of work as well.
So, they recorded a lot in Hungarian, and before that they worked for months with an accent coach. But as you know, the Hungarian language is one of the hardest out there in the world. Yeah. We had to fix some tiny things, like particular vowels or consonants in those words. Mm-hmm. And they used the technology to make it sound perfect, to do this very sophisticated post-production.
So a Hungarian speaker wouldn't hear any accent when they listen to it. Mm-hmm. And I adore their approach. I mean, that's the beauty of working with such creators. Again, the perfection in filmmaking, the vision of a director, the passion of the actors to get it right, and then using technology, embracing technology.
To help them, even though they went through the effort with the training and voice coaching and dialect coaching. So, to me, you know, it was a non-issue, certainly. And you mentioned that it came up in the trades, you know, as everyone's competing and they're trying to dis their competitors, right?
But yeah, so I applaud you, and to me that is just a great use of AI. You know, I think we're still a ways off from an AI film competing with other films. But here are these tools, Respeecher is one, that seasoned filmmakers, editors, producers, directors, special effects artists can use to help them in production.
And to me, what is wrong with that? And so I think they're starting to embrace it, and I think sometimes controversy bubbles up around this technology, and then people will be curious and dig into it. Have you found that after these controversies attention spiked, and maybe you got more requests to learn more from Hollywood and the film community?
Yeah, I would say so. So, we had some spike in what we call traffic, but essentially, we'd been working with almost all Hollywood studios by then. So, we'd been talking to those people already, being engaged in some projects that are under development right now, or even at a later stage.
What's cool about that is that we had a lot of requests for perfecting languages and accents. Mm-hmm. Which I think is very important for the industry. Because when you make high-quality content, it needs to be high quality in terms of the sound itself. Mm-hmm. And in many cases, you cannot make a person learn a language
in a matter of weeks or months; you cannot make a person speak with a perfect accent in a particular language. And when you need several lines in the movie, you'd better use technology. Because first, it's available, it's out there. Secondly, it serves the need of making it sound good. It was interesting with The Mandalorian.
So, when it comes to our technology being used, we are always up for disclosing the fact of our technology being used as early as possible in the process. And we encourage our clients to do so. But in some cases, their PR decision and creative decision is to keep it silent for some time. And the reason some of them do that is because we humans are very much biased.
So, if we knew that, say, Mark Hamill was using technology in The Mandalorian, we would hear some things we would never hear without this knowledge. We would just imagine some things about the sound. And what Jon Favreau did: he just kept silent. We saw that no one ever reacted to the voice. No one ever thought that the voice was manipulated somehow.
Mm-hmm. And then, in the making-of for The Mandalorian, he went out and told how we made it. We used Respeecher, we used 40-year-old tapes for training the models. And that's exactly how the technology helped us accomplish this mission of bringing the character back. Yeah, that's a great story, you know, a behind-the-scenes story, and working with him, I'm sure it was fantastic.
He's such a visionary. So, I want to ask you two questions, if you could elaborate. You know, one is the ethics and permissions, licensing, those types of things. And then how that relates to music and AI. And if you can talk a little bit about Vocal Roots, which I think is built on your platform, which is doing some amazing things.
So, can you elaborate on that for a while? Yeah, for sure. In terms of ethics, I wouldn't distinguish any areas one from another. I mean, ethics is the whole solid thing. It applies to feature film, it applies to animation, it applies to music and video games in the same way. And our logic is to be very conservative
in regards to ethics. When we founded the company, one of the goals we put in front of us, along with creating this amazing technology, was not letting anyone use this technology for bad things, for fooling people into hearing something someone never said, stuff like that. So, we put very strict boundaries in terms of how the technology can be used.
And one of the main limitations is: you cannot use Respeecher services and Respeecher technology if permission from a voice owner is not in place. So, permission is the very starting point. And actually, it's the first question our team asks when we are in early stages of engagements with content makers. Permission is extremely important.
But along with that, we've been very cautious about how the technology is highlighted in the project, what is actually being told, what the story is. And also, we cut off some of the use cases for the technology, even with the permission in place, when we feel not comfortable about two things. First, Respeecher being associated with those things.
Secondly, synthetic voice technology in general being associated with those things. And those would be politics, improper advertisements where misleading is happening, stuff like that. So, we always had to dig deep into the project's essence, because in some cases we had permissions, but then we found out that the person who gave the permission is in jail and they're doing a new album and stuff.
The album is okay. The content is okay. But then we have to go deeper and see what the person is in jail for. And when we see what the person is in jail for, we feel very uncomfortable being associated with that. So that's quite an unusual structure of working in a startup, and with a startup from the side of content makers.
But that's one of the cornerstones at Respeecher. So, the first one is quality: we deliver quality no one else does in the industry. The second one is trust. We are trusted not just by Disney, Sony, NBCU, Warner, you name them. We are also trusted by those individuals, by Mark Hamill, by Adrien Brody, by those directors.
And this trust is an essential component for this business to move on. The technology will be adopted, but adoption goes through the fear stage. And this fear stage is very much prolonged in time, but also, it's quite intense. Because the main asset those talents have been building through their life is their likeness.
And they are afraid of being detached from their likeness, of who's in control over their likeness. And this trust component is something that makes them comfortable to engage with the technology, to explore the technology, to start doing some cool and amazing things the technology would allow them to do.
Vocal Roots AI is one of the engagements, and in particular it's in the music industry. And the idea is to bring back some of the hip-hop legends' voices. With all permissions, with all the trust components, with all due respect to the craft, to the talent, to the community, to the listeners.
We start with some first voices. There is a pipeline of voices we've been working with this year. And it sounds exciting because, I mean, they're somewhat history-related projects, which are quite important, in my opinion, for humanity in general. We should pay more attention to our history.
Yeah. So, you worked with Frank Nitty on Vocal Roots, and he's a legend in West Coast rap. Snoop Dogg mentored Bad Azz, and that's one of the deceased rappers whose voice you resurrected. So did Frank come to you with an idea after he researched what would be the best platform? And was, you know, that idea of trust a factor in him deciding to go with Respeecher?
Yeah, I think so. It's been a while since we first engaged, maybe even two years; I don't recall. So, it took a while for us to get to know each other, to see what's under the hood of each other, to start doing some early tests and see how this engagement could go. But essentially, there are two challenges: first, for us and for Frank, to ensure that it's all ethical and respectful.
Secondly, to ensure that it's deliverable in terms of the quality. So, we want this music to be high-quality music, so you wouldn't even tell that it's synthesized. That's the goal of high-quality synthetic voice technology: to be indistinguishable from a real recording. Music is a little bit harder than doing just speech.
Because, I mean, those worlds are different. You have vocals, and classical models, like synthetic speech models, are not doing an amazing job on music. But what's cool about Respeecher: we started doing some music stuff back in 2020, 2021, a while ago. And given that we are exposed to tasks covering quite a wide emotional range from Hollywood clients, from video games, meaning having screaming, whispering, crying,
singing is out there too. We've been polishing our models towards performing really well in singing mode, and rapping mode is something that we do pretty well. You might take a listen to one piece with Aloe Blacc, back in 2022, when he was making his tribute to Avicii, his close friend who had passed away some time before that.
And what Aloe wanted to do: he wanted to recreate the song Wake Me Up, which they wrote together and recorded together, in different languages, to make it multilingual. So, we created the model for Aloe's voice, and we made five versions of this song in different languages. Then it was all joined with visuals by Metaphysic,
one of the famous providers of visual synthesis. They made Aloe Blacc move his lips in the exact way he should be moving those lips for the foreign versions of the songs he's been singing. Cool. Yeah. So, you speak about localization, you know, the fact that musicians could now use this technology to, you know, go into other countries in the native language of that country and further sell and promote their music to new audiences.
That is kind of mind-blowing in terms of market expansion. And along with that, they can engage with the audience in a different way, right? They can do marketing in their own voice, in the local language. They can invite people to a particular place where the concert is happening, on a local radio station and in the local language.
We're starting to have those projects, but along with that, we are also creating some new experiences where you can basically communicate with a virtual musician. So, there is a model of real-time text-to-speech. There is an LLM behind it, and you can have a conversation with a musician right on the spot. So, it feels like a human conversation, like the one we are having with you right now.
Yeah. And that's really cool, I think. These are great applications because, you know, music and audio, to me, it's kind of the low-hanging fruit technology-wise, just because it's an MP3 at the end of the day, kind of a lo-fi file. But then, you know, syncing it with an image likeness, that's kind of the next level of technology, and lip syncing has always been an issue because it just was never right.
Right. It's like the lips don't sync with what's going on. And so it seems like you've kind of conquered those challenges and expanded the market. And you know, I'm kind of against, or not really an advocate for, robotics and how robots are going to take over, certainly musicians and stuff.
I've never seen a robot play Rachmaninoff. And when they do, then I'll be impressed. But until then, forget it. You know, so there's that performance aspect, but then there's the interaction with video, with hearing and seeing, and not necessarily in a live performance of instruments, let's say.
I think you seem to thrive in that category, which I think is great, because why not, you know, push the technology into a practical use case where people can believe they're interacting with these artists in their own native language. I think that's really awesome.
So, tell me a little more about some of the more recent challenges: technology-wise, ethics-wise, industry-wise. I know you've limited some of the business silos so that you can thrive and focus, and I think any startup needs to focus. So, I applaud you on that. But what are your newer challenges?
Yeah, I mean, the technology itself is always a challenge, because we constantly work on being ahead in this quality race, right? Right. So, we work on the authenticity of the sound. We work on the naturalness of the sound. We work on speaker identity. We work on the speed of those models. We work on the data requirements, so the models would require as little data as possible.
We work on the speed of the conversion itself, making it real time and even faster than real time. So those are constant research and development challenges that the team faces, and it's not easy work. I mean, it's the kind of work where you have tons of hypotheses and you have to rank them in terms of what's likely and what's not.
And then you try different stuff. You try to use your creativity and imagination to apply some learnings from another domain to make it work. And then like one out of a hundred of your ideas actually works out. And then you have to embed that into the product. So, like 95 to 99% of what R&D is doing goes to the trash bin, just because they're testing those hypotheses.
And I think it's a challenge in itself to do that constantly, to do that for years. We also see that there is a huge spike in terms of attention to synthetic voice technology, and it's somewhat driven by what we call an unethical approach, when you just let people create whatever voices they want, reproduce whatever voices they want, and that's bad.
That's something we avoided from the very beginning, but it gives some virality to those providers and to those technologies. In some cases, it creates a mess in the market, because the market needs to navigate again and again through those very logical and, to us, very simple ethical challenges, whether it's right or not.
Because, I mean, they try to draw this gray area and distribute the responsibility for the misuse that's happening. The industry itself has been somewhat slow to adopt those technologies because it's somewhat fear-driven, right? It takes time for the industry to navigate. And our work here is to keep showcasing those amazing uses of the technology in order to help the industry navigate.
Because the technology is neutral in itself, and usage of the technology is defined by creators and by the audience, which ultimately accepts the technology's derivatives. But it's been slow in terms of the startup pace. It took years for the industry to get to the point where it is now, when it starts to utilize the technology at different stages of production: in pre-production and post, in amusement parks and contact centers, in marketing and trailers and music and interactive video game experiences. It took quite a bit of time, and it also took quite a bit of time for talents to navigate to the point where they are, and they're still navigating. So, patience is one of the challenges we also face at Respeecher, because we have to be persistent and very patient, with all due respect to this industry.
Yeah. I'm so glad you mentioned that the technology is kind of neutral, but it's the creative people who use that technology to take it to new levels and build new use cases. I think that is what elevates the technology and the perception. And you're using that as kind of a sales tool, really, right?
Because you're constantly looking for new use cases, using your creative minds to imagine, right, what's possible. And I applaud that effort, because I think with any new technology, it's always going to be the creative community that takes it to the next level. Exactly.
Otherwise, it's just a technology, and it won't advance. It'll reach a peak at some point. Right. But then creatives will come along and say, well, no, I could use it for this and do this that's never been done before. And that's thinking outside the box, right? And that's thinking beyond the kind of engineer mentality of, well, we're just building this technology to clone the voice.
That's it, that's all we have to do. It's like, well, not really, because there are all these other complexities around the use cases, right? So, I applaud your efforts in being that creative spark. I call it creative-centered AI. And, you know, I am a designer, so I come from the design background.
And, you know, human-centered design is certainly a driving force in how we connect with consumers. And I think creative-centered AI is what will elevate the technology to reach new levels and, you know, continually blow people's minds with what's going on. And I certainly appreciate the ethical lens with which
you're doing everything, and that you do your due diligence on the clients that approach you. It's so refreshing to have a company, a startup, a successful startup, succeed, you know, with that lens. So, I applaud your efforts. Thank you so much. But it's still somewhat unusual for the tech industry, for the startup industry.
But that comes from the essence of Hollywood, the essence of high-end content creators. Because there are a lot of things unusual for the tech industry at the core of content creation: respect, craft, looking into ways a tech could enhance a human in some of the things that are essential to humans, like acting, performing, and looking into ways AI can
enhance the craft itself, the human craft, the creativity. Yeah, it's a very interesting domain, and it's very cool to be deep into that, observing how the tech changes and influences the approach we used to take to filmmaking and to video game making, as well as how it is being navigated by humans in order to use it in a very good way.
Yeah, I think this technology empowers people, empowers creatives, and empowers processes. People create their own workflows within whatever industry they're in. I know you mentioned advertising as something maybe you're staying away from, but I think AI and video may not be that far off.
You know, I think AI films are a little further out, but AI in advertising, cutting commercials, and creating fantastic and titillating imagery that's kind of non sequitur: I think on Madison Avenue, as we call the advertising industry here in the States, AI is going to really kick some butt and shake things up, if it hasn't already.
What are your thoughts about advertising, you know, and applications there? Yeah. I wouldn't say that we are staying away from advertisement. It's a legit industry that actually requires innovation, because a lot of the ads we see are irrelevant. They're not making sense. They're poor quality.
And this industry will still exist as long as things are being sold out there. Right. But our approach to advertising is that we have to be sure there is no misleading use of synthetic likeness. So, we have rejected some cases where we had permission and there was a famous voice, but they wanted to advertise some medical products that are not FDA approved. Mm, okay. This is a dark zone for us. We are not comfortable there. So we are cautious with such cases when it comes to misleading use of synthetic voice technology. But I don't say that misleading does not happen in advertisement.
So, we just want to be sure that our tech, and synthetic voice tech in general, is not being used there. We are very cautious with the brand of synthetic voice overall. Yeah. Well, you mentioned synthetic, and synthetic media is really a brand-new category. And, you know, ironically, synthetic data
is another aspect of that, maybe a negative one, because it's kind of AI eating itself. But I'm fascinated with synthetic media. I think it's a viable business as long as you do it ethically, like you're doing. But synthetic media as a category, you know, it's a new term. I don't hear a lot of people talking about it or mentioning it as a category, but I believe it is a category, and a nice way to counter, you know, the deepfake technology, which is a negative, right?
And associated with, you know, ripping people off. But I'm a brander, so words are important to me, and synthetic media as a brand, as a descriptor of what's going on, I think is accurate. And so how do you feel about that? And is that helping you in guiding potential clients and eliminating some of the fear, with the use of that term?
Hmm. First of all, I've always been against the term deepfake being used, just because it has so much negative connotation in itself. It's not fair to use the word deepfake for many amazing technologies out there, but unfortunately it became somewhat the descriptive word in the business. So, I'm a fan of synthetic voice, synthetic visuals.
But in terms of the category itself, it's actually a challenge, because nothing similar ever existed, in my opinion, and that's the problem. Because when you do something cool in technology, you can always find a reference. You can say it's the Uber of something, the Spotify of something, the YouTube of something, the Google of something. But
not in the case of synthetic likeness, because the whole concept of likeness is being changed. Humans are being freed of the boundaries they had. Those boundaries are being removed: humans speaking languages they don't know, being able to perform in ways they're not able to perform.
Being in many places at the same time. Having every version of the voice they had through their lifespan. Being able to perform in the voice of another person. It's this completely new concept, where you are detaching something very human in nature from the human body itself.
And I would say that this category is imposing challenges, because when you need to explain that to investors, when you need to explain that in simple words to clients, it can be uneasy. But the thing is, when you start digging deeper, you see that several domains are very much influenced by the technology: the domain of copyright and
original material data, the domain of likeness itself and likeness rights, the domain of craft and creativity, the domain of acting. So, all those things have to be put together when you decide on the good applications of the technology and how exactly to embed it in workflows, or even create new workflows based on the technology.
And that's one of the reasons why those strikes were so harsh over the last years, right? Yeah. Because it's uneasy to navigate. Well, yeah, it's uneasy to navigate because there isn't that simple metaphor. But that is also an indicator, you know, that you are in pioneering mode, and you've reached what I call your POI, your point of innovation, as a technology company.
A point of innovation is when you're at a point where you're in a category that is not easily defined. You're ruffling some feathers and you're pushing the boundaries in some aspect. And as an innovator and as a tech leader, you need to thrive in that space,
knowing you're in that space, and then bring people along with you. Many people in technology, many startup founders, don't have a POI in their deck or their product offering. Or they don't know whether they're in it or out of it, or what they need to do to achieve it.
So, it's very important to know that any new technology advance creates discomfort. And so, as an innovator, you have to embrace that, because you know you're in the right space, because you want to do something new, right? You want to challenge the norm. And so these are terms that I've used and coined to help founders realize their vision and know how to speak to potential investors, because there
isn't always a common language when presenting an offer, and so you have to provide context in which people can absorb it and open their minds to the possibilities. So, these are just little techniques and terms I use, and I applaud those people who are in that space and have achieved that.
And I point and say, look at these guys. They did it. No one was doing it, and it took years. It wasn't, you know, an overnight success. So, what are some of your upcoming projects you can talk about? I know sometimes there are NDAs in place where you can't, but maybe speak broadly, or to categories you can speak to.
Yeah, I mean, we cannot talk about projects until production is over, and in most cases until the release itself. We have quite a bit of work, some cool work, in the feature-film space. We're doing some very cool music work right now. We do quite a bit of work in the TV space. We do platform engagements with studios, where they start to use us for pre-production use cases, for casting different voice characteristics, and in production to enhance their creative process.
We're starting to put the technology into hospitality use cases, where you can actually engage with a real-time synthetic voice as a visitor. Those are really cool cases. We keep marrying our technology with visual providers, so we are doing something with holograms right now, 3D characters, 3D avatars.
It's interesting to get those different parts of the same technology merged together to provide the full experience. Most of the work is in the high-end space, but what we recently released is a new piece of technology called real-time text-to-speech. It's not new to the market, but it's new for Respeecher in terms of releasing it at a feature-film level of quality. We've been very picky about our releases because we are so quality-oriented, and we had not been releasing real-time technology, real-time text-to-speech technology, for quite a while, until we were satisfied. But now we are exactly at the moment where we are satisfied, and it can be massively integrated into products that require a voice component, that require voice agents,
AI agents that speak, that require character voices to be part of the technology. And it's a very important time, because right now we are at the edge of a change that has happened in technology. Because a couple of years ago, our computer, our phone, was just a tool. Just an instrument. Mm-hmm. Now it's a partner.
Everything has changed. We communicate with our computer, our phone, in a different way. And what we see is that when we communicate with something that is human-like, a partner, the most comfortable channel for us is the verbal channel of communication. That's how humans actually communicate.
Yeah. We are yet to see voice-first interfaces. Those never happened in the industry, because the voice-first interfaces that were created were for people with vision disabilities. Yeah. Not for a general audience. These are very exciting times, and we are happy to support this technology change in general.
We have some new pieces of technology at the Respeecher level of quality, built by a trusted company in this space. Yeah. You know, the whole idea of user interface design, how you interact with these technologies, and how you manipulate a voice with a tool: there have been some feeble attempts and a lot of frustration, so that's a great problem to solve, certainly in the
customer relationship management space, right? Where you're talking with an evolved chatbot, which is an agentic AI at this point. And that's a whole new category. So that's exciting. I think it's an exciting evolution, and again, kudos to you for recognizing this interface that sits between you and the outcomes you need, and that it needs to be prosumer, consumerized, however you want to state that.
But that's awesome. So, listen, we're getting to the end of our podcast. Do you want to give a shout-out to where people can reach you, your platforms, URL and such? Yeah, for sure. We are at www.respeecher.com. You can see us on IMDb. But the best way to engage with our work is to go to the cinema and see some of those amazing movies
we are part of. Awesome. Well, listen, thanks Alex, and thanks to all of you for tuning in. Catch more of our RealmIQ: SESSIONS on your favorite podcast platforms. Please follow and smash that subscribe button. You can also follow us on TikTok, LinkedIn, and Bluesky. Signing off. Thanks a lot. Thank you.