Voice AI’s Big Moment: Why Everything Is Changing Now (ft. Neil Zeghidour, Gradium AI)

FEB 19, 2026 · 82 MIN
The MAD Podcast with Matt Turck

Description

Voice used to be AI’s forgotten modality — awkward, slow, and fragile. Now it’s everywhere. In this reference episode on all things voice AI, Matt Turck sits down with Neil Zeghidour, a top AI researcher and CEO of Gradium AI (ex-DeepMind/Google, Meta, Kyutai), to cover voice agents, speech-to-speech models, full-duplex conversation, on-device voice, and voice cloning.

We unpack what actually changed under the hood — why voice is finally starting to feel natural, and why it may become the default interface for a new generation of AI assistants and devices.

Neil breaks down today’s dominant “cascaded” voice stack — speech recognition into a text model, then text-to-speech back out — and why it’s popular: it’s modular and easy to customize. But he argues it has two key downsides: chaining models adds latency, and forcing everything through text strips out paralinguistic signals like tone, stress, and emotion. The next wave, he suggests, combines cascade-like flexibility with the more natural feel of speech-to-speech and full-duplex conversation.

We go deep on full-duplex interaction (ending awkward turn-taking), the hardest unsolved problems (noisy real-world environments and multi-speaker chaos), and the realities of deploying voice at scale — including why models must be compact and when on-device voice is the right approach.

Finally, we tackle voice cloning: where it’s genuinely useful, what it means for deepfakes and privacy, and why watermarking isn’t a silver bullet.

If you care about voice agents, real-time AI, and the next generation of human-computer interaction, this is the episode to bookmark.

Neil Zeghidour
LinkedIn - https://www.linkedin.com/in/neil-zeghidour-a838aaa7/
X/Twitter - https://x.com/neilzegh

Gradium
Website - https://gradium.ai
X/Twitter - https://x.com/GradiumAI

Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck

FirstMark
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap

(00:00) Intro
(01:21) Voice AI’s big moment — and why we’re still early
(03:34) Why voice lagged behind text/image/video
(06:06) The convergence era: transformers for every modality
(07:40) Beyond Her: always-on assistants, wake words, voice-first devices
(11:01) Voice vs text: where voice fits (even for coding)
(12:56) Neil’s origin story: from finance to machine learning
(18:35) Neural codecs (SoundStream): compression as the unlock
(22:30) Kyutai: open research, small elite teams, moving fast
(31:32) Why big labs haven’t “won” voice AI
(34:01) On-device voice: where it works, why compact models matter
(41:35) Benchmarking voice: why metrics fail, how they actually test
(46:37) The last mile: real-world robustness, pronunciation, uptime
(47:03) Cascades vs speech-to-speech: trade-offs + what’s next
(54:05) Hardest frontier: noisy rooms, factories, multi-speaker chaos
(1:00:50) New languages + dialects: what transfers, what doesn’t
(1:02:54) Hardware & compute: why voice isn’t a 10,000-GPU game
(1:07:27) What data do you need to train voice models?
(1:09:02) Deepfakes + privacy: why watermarking isn’t a solution
(1:12:30) Voice + vision: multimodality, screen awareness, video+audio
(1:14:43) Voice cloning vs voice design: where the market goes
(1:16:32) Paris/Europe AI: talent density, underdog energy, what’s next