Future of Life Institute Podcast


We're Not Ready for AGI (with Will MacAskill)

NOV 14, 2025 · 123 MIN

Description

William MacAskill is a senior research fellow at Forethought. He joins the podcast to discuss his Better Futures essay series. We explore moral error risks, AI character design, space governance, and persistent path dependence. The conversation also covers risk-averse AI systems, moral trade between value systems, and improving model specifications for ethical reasoning.

LINKS:
- Better Futures Research Series: https://www.forethought.org/research/better-futures
- William MacAskill Forethought Profile: https://www.forethought.org/people/william-macaskill

CHAPTERS:
(00:00) Episode Preview
(01:03) Improving The Future's Quality
(09:58) Moral Errors and AI Rights
(18:24) AI's Impact on Thinking
(27:17) Utopias and Population Ethics
(36:41) The Danger of Moral Lock-in
(44:38) Deals with Misaligned AI
(57:25) AI and Moral Trade
(01:08:21) Improving AI Ethical Reasoning
(01:16:05) The Risk of Path Dependence
(01:27:41) Avoiding Future Lock-in
(01:36:22) The Urgency of Space Governance
(01:46:19) A Future Research Agenda
(01:57:36) Is Intelligence a Good Bet?

PRODUCED BY:
https://aipodcast.ing

SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP