E191 - Roman Yampolskiy: The Man Who Proved We Can't Control AI (And What That Means for Humanity)
Dr. Roman Yampolskiy joins me to explore one of the most urgent and uncomfortable questions of our time: what happens when we create intelligence that surpasses our own? We unpack the difference between the AI tools we use today and the emergence of artificial general intelligence, and why the transition from narrow systems to self-improving intelligence may mark a point where human control is no longer possible. Roman shares why even the people building these systems do not fully understand how they work, and why that gap in understanding becomes exponentially more dangerous as capabilities increase.

In this conversation, we explore the limits of control, prediction, and safety in a world where intelligence can recursively improve itself beyond human comprehension. Roman lays out why the problem of AI alignment may be fundamentally unsolvable, what timelines experts are realistically considering, and why even a single mistake at that level could have irreversible consequences. This episode invites a deeper reflection on what we are creating, what we assume we can control, and whether humanity is prepared for the intelligence it is bringing into existence.

BiOptimizers - Best magnesium to enhance your sleep
http://bioptimizers.com/knowthyself
Use code KNOWTHYSELF for 15% off at checkout

BASED Body Works
Use code KNOWTHYSELF for a free toiletry bag when buying a set!
https://www.basedbodyworks.com

Andrés Book Recs: https://www.knowthyselfpodcast.com/book-list

___________

00:00 Intro
01:25 What Is AGI and Why Should We Be Scared?
05:17 Roman's Journey: From Optimism to Impossibility
09:07 The High Risk, Zero Reward Equation
13:01 Why Superintelligence Is Uncontrollable, Unexplainable, and Unverifiable
18:00 How Long Do We Have? The AGI Timeline
21:24 How Superintelligence Could Actually Kill Us
23:28 Are We Living in a Simulation?
28:21 Can AI Become Conscious?
31:28 Ad: BiOptimizers
32:41 The Possible Timelines: Terminator, the Matrix, or the Zoo
42:24 I-Risk, X-Risk, and S-Risk: Three Ways It Goes Wrong
46:31 The Human Meaning Crisis: Jobs, Purpose, and What's Left
49:02 Ad: Based Bodyworks
50:20 What Empowers Us as Individuals Right Now
59:37 The Race to Doom: Who's Building It and Why They Won't Stop
1:07:41 Can AI Be Conscious, and Does It Already Have Internal Experiences?
1:12:41 Hacking the Simulation: Quantum, DMT, and Escaping the Code
1:18:30 Simulation Theory, Religion, and the Same Ancient Map
1:29:34 The Deal Roman Would Offer Altman, Dario, and Elon
1:39:44 What Is Humor? A Computer Scientist's Theory
1:43:03 What Comes After: Singularity, Death, and Knowing Thyself

___________

Episode Resources:
https://www.romanyampolskiy.com/
https://www.amazon.com/Unexplainable-Unpredictable-Uncontrollable-Artificial-Intelligence/dp/103257626X
https://www.instagram.com/andreduqum/
https://www.instagram.com/knowthyself/
https://www.youtube.com/@knowthyselfpodcast
https://www.knowthyselfpodcast.com

Listen to the show:
Spotify: https://spoti.fi/4bZMq9l
Apple: https://apple.co/4iATICX