For Humanity: An AI Risk Podcast

The AI Risk Network

Details

For Humanity: An AI Risk Podcast is the AI risk podcast for regular people. Peabody, duPont-Columbia, and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within the next 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and lay out what you can do to help save humanity. theairisknetwork.substack.com

Recent Episodes

The Congressman Who Gets AI Extinction Risk — Rep. Bill Foster on the Future of Humanity | For Humanity | Ep. 75
DEC 6, 2025
In this episode of For Humanity, John Sherman sits down with Congressman Bill Foster — the only PhD scientist in Congress, a former Fermilab physicist, and one of the few lawmakers deeply engaged with advanced AI risks. Together, they dive into a wide-ranging conversation about the accelerating capabilities of AI, the systemic vulnerabilities inside Congress, and why the next few years may determine the fate of our species.

Foster unpacks why AI risk mirrors nuclear risk in scale, how interpretability is collapsing as models evolve, why Congress is structurally incapable of responding fast enough, and how geopolitical pressures distort every conversation on safety. They also explore the looming financial bubble around AI, the coming energy crunch from massive data centers, and the emerging threat of anonymous encrypted compute — a pathway that could enable rogue actors or rogue AIs to operate undetected.

If you want a deeper understanding of how AI intersects with power, geopolitics, compute, regulation, and existential risk, this conversation is essential.

Together, they explore:

* The real risks emerging from today’s AI systems — and what’s coming next
* Why Congress is unprepared for AGI-level threats
* How compute verification could become humanity’s safety net
* Why data centers may reshape energy, economics, and local politics
* How scientific literacy in government could redefine AI governance

👉 Follow More of Congressman Foster’s Work:

📺 Subscribe to The AI Risk Network (https://www.youtube.com/@TheAIRiskNetwork) for weekly conversations on how we can confront the AI extinction threat.

#AISafety #AIAlignment #ForHumanityPodcast #AIRisk #FutureOfAI #AIandWarfare #AutonomousWeapons #AIEthics #TechForGood #ArtificialIntelligence
70 MIN
AI Risk, Superintelligence & The Fight Ahead — A Deep Dive with Liv Boeree | For Humanity #74
NOV 22, 2025
In this episode of For Humanity, John sits down with Liv Boeree (https://substack.com/profile/919249-liv-boeree) — poker champion, systems thinker, and longtime AI risk advocate — for a candid conversation about where we truly stand in the race toward advanced AI. Liv breaks down why public understanding of superintelligence is so uneven, how misaligned incentives shape the entire ecosystem, and why issues like surveillance, culture, and gender dynamics matter more than people realize.

They explore the emotional realities of working on existential risk, the impact of doomscrolling, and how mindset and intuition keep people grounded in such turbulent times. The result is a clear-eyed and surprisingly hopeful look at the future of technology, power, and responsibility. If you’re passionate about understanding AI’s real impacts (today and tomorrow), this is a must-watch.

Together, they explore:

* The real risks we face from AI — today and in the coming years
* Why public understanding of superintelligence is so fractured
* How incentives, competition, and culture misalign technology with human flourishing
* What poker teaches us about deception, risk, and reading motives
* The role of women, intuition, and “mama bear energy” in the AI safety movement

👉 Follow More of Liv Boeree’s Work: https://substack.com/profile/919249-liv-boeree

📺 Subscribe to The AI Risk Network (https://www.youtube.com/@TheAIRiskNetwork) for weekly conversations on how we can confront the AI extinction threat.

#AISafety #AIAlignment #ForHumanityPodcast #AIRisk #FutureOfAI #AIandWarfare #AutonomousWeapons #AIEthics #TechForGood #ArtificialIntelligence
77 MIN
AI Safety on the Frontlines | For Humanity #73
NOV 8, 2025
In this episode of For Humanity, host John Sherman speaks with Esben Kran, one of the leading figures in the for-profit AI safety movement, joining live from Ukraine — where he’s exploring the intersection of AI safety, autonomous drones, and the defense tech boom.

🔎 They discuss:

* The rise of for-profit AI safety startups and why technology must lead regulation.
* How Ukraine’s drone industry became the frontline of autonomous warfare.
* What happens when AI gains control — and how we might still shut it down.
* The chilling concept of a global “AI kill chain” and what humanity must do now.

Esben also shares insights from companies like Lucid Computing and Workshop Labs, the growing global coordination challenges, and why the next AI safety breakthroughs may not come from labs in Berkeley — but from battlefields and builders abroad.

🔗 Subscribe for more conversations about AI risk, ethics, and the fight to build a safe future for humanity.

📺 Watch more episodes: http://www.youtube.com/@theairisknetwork

#AISafety #AIAlignment #ForHumanityPodcast #AIRisk #FutureOfAI #AIandWarfare #AutonomousWeapons #AIEthics #TechForGood #ArtificialIntelligence
55 MIN
Stuart Russell: “AI CEO Told Me Chernobyl-Level AI Event Might Be Our Only Hope” | For Humanity #72
OCT 25, 2025
Let’s face it: in the long run, there’s either going to be safe AI or no AI. There is no future with powerful unsafe AI and human beings. In this episode of For Humanity, John Sherman speaks with Professor Stuart Russell — one of the world’s foremost AI pioneers and co-author of Artificial Intelligence: A Modern Approach — about the terrifying honesty of today’s AI leaders.

Russell reveals that the CEO of a major AI company told him his best hope for a good future is a “Chernobyl-scale AI disaster.” Yes — one of the people building advanced AI believes only a catastrophic warning shot could wake up the world in time. John and Stuart dive deep into the psychology, politics, and incentives driving this suicidal race toward AGI.

They discuss:

* Why even AI insiders are losing faith in control
* What a “Chernobyl moment” could actually look like
* Why regulation isn’t anti-innovation — it’s survival
* The myth that America is “allergic” to AI rules
* How liability, accountability, and provable safety could still save us
* Whether we can ever truly coexist with a superintelligence

This is one of the most urgent conversations ever hosted on For Humanity. If you care about your kids’ future — or humanity’s — don’t miss this one.

🎙️ About For Humanity: a podcast from the AI Risk Network, hosted by John Sherman, making AI extinction risk a kitchen-table conversation on every street.

📺 Subscribe for weekly conversations with leading scientists, policymakers, and ethicists confronting the AI extinction threat.

#AIRisk #ForHumanity #StuartRussell #AIEthics #AIExtinction #AIGovernance #ArtificialIntelligence #AIDisaster #GuardRailNow
92 MIN