Mike McCormick: AI Acceleration vs Risks, Funding Global Resilience, AGI scenarios, U.S. vs China
OCT 9, 2025 · 102 MIN
Description
Join Tommy Shaughnessy as he speaks with Mike McCormick, founder of Halcyon, about the urgent intersection of AI acceleration and safety. Mike shares his path from venture capital to launching a hybrid nonprofit–fund model focused on securing advanced AI systems. They dive into mechanistic interpretability, global competition for AGI, and what a safe superintelligence future could look like. Can we build superintelligence safely? How do we balance innovation with existential risk? And what happens to humanity when AGI arrives?

Halcyon Futures: https://halcyonfutures.org

🎯 Key Highlights

▸ Leaving VC to focus entirely on AI safety and security
▸ Why Halcyon flipped the model: nonprofit first, fund second
▸ A multi-layered “defense in depth” approach to AI biosecurity & cyber risk
▸ The acceleration vs. safety debate — finding middle ground
▸ The Goodfire case: career grants for interpretability research
▸ The 2×2 dilemma — speed vs. slowdown, centralization vs. decentralization
▸ U.S.–China dynamics and fast-takeoff scenarios
▸ AI underwriting: how insurance can drive safety standards
▸ Founder–market fit and mission orientation in AI startups
▸ Risk, diffusion, and the uncertain path to AGI

💡 Subscribe for more crypto & infrastructure insights! 🔔

🧠 Follow the Alpha

▸ Mike's Twitter: @MikeMcCormick_
▸ Halcyon's Twitter: @HalcyonFutures

🔗 Connect with Delphi

🌐 Portal: https://delphidigital.io/
🐦 Twitter: https://x.com/delphi_digital
💼 LinkedIn: https://www.linkedin.com/company/delphi-digital/

🎧 Listen on

Spotify: https://open.spotify.com/show/62PR1RigLG2YN5Pelq6UY9?si=18ac7ccf36ab4753&nd=1&dlsi=50105fd66e6c4124
Apple Podcasts: https://podcasts.apple.com/us/podcast/the-delphi-podcast/id1438148082
YouTube: https://www.youtube.com/channel/UC9Yy99ZlQIX9-PdG_xHj43Q

Timestamps

00:00 — Mike’s background & pivot to AI safety
03:00 — The realization: AGI could change everything
05:00 — Why VC wasn’t enough to solve the problem
06:00 — Halcyon’s hybrid model and early mission
08:00 — AI security concerns: misuse, bio, and control
12:00 — Defense in depth: pre-training → deployment
15:00 — The creativity vs. restriction trade-off
17:30 — Pause AI vs. Build Baby Build
20:00 — Speed vs. centralization: the 2×2 framework
24:00 — Goodfire: career grants & interpretability
27:00 — Writing to neurons: alignment and insight
30:00 — How insurance markets can enforce safety
36:00 — Mission-driven founders & conviction filters
44:00 — Geopolitical race: U.S., China, and compute
50:00 — Diffusion limits, adoption, and energy costs
57:00 — Mass unemployment and meaning after AGI
01:05:00 — What “winning” AGI means for humanity
01:12:00 — Critical thinking, sycophantic AI, and engagement traps
01:20:00 — UBI, adaptation, and new work paradigms
01:30:00 — Three AGI futures: scale, shift, or stall
01:36:00 — 20% catastrophic risk & the asteroid analogy
01:40:00 — Final message: talent is upstream of everything

Disclaimer

This podcast is strictly informational and educational and is not investment advice or a solicitation to buy or sell any tokens or securities or to make any financial decisions. Do not trade or invest in any project, tokens, or securities based on this podcast episode. The host and members of Delphi Ventures may personally own tokens or art mentioned on the podcast.