Former Google Chief Decision Scientist Cassie Kozyrkov on AI, Decisions, and Human Responsibility
JAN 13, 2026 · 80 MIN
Description
What happens when one of the world's foremost decision-making and AI ethics leaders steps into a room built around integrity-based human influence?

In this powerful episode of Unblinded with Sean Callagy, Sean sits down with Cassie Kozyrkov, former Chief Decision Scientist at Google and a global authority on AI, data, and decision intelligence, for a conversation that reframes how we think about choices, responsibility, and the future of AI.

From her unconventional upbringing in South Africa and early obsession with spreadsheets to her groundbreaking work at the intersection of human judgment and machine intelligence, Cassie reveals why AI is not about replacing humans but about amplifying human agency.

Together, Sean and Cassie dismantle the myth of "autonomous AI," explore why decision-making is the most important skill of the future, and challenge leaders to become better wishers in a world where technology increasingly grants our wishes at scale.

This episode is a must-listen for founders, executives, technologists, and anyone navigating leadership in an AI-accelerated world.

Episode Highlights
- Why decision-making, not intelligence, is the ultimate competitive advantage
- How AI is fundamentally a human system shaped by human choices
- The danger of outsourcing judgment, and how to protect human agency
- Why bad outcomes come from unskilled wishers, not bad technology
- How personalization, language, and AI will reshape work, education, and society
- The ethical responsibility leaders carry as AI scales human impact
- Cassie's vision for a future where AI elevates humanity instead of dulling it

Memorable Quotes
"AI is not autonomous. It is human decisions all the way through."
"The real danger isn't powerful technology—it's unskilled wishers with powerful tools."
"If information isn't connected to action, it doesn't matter."
"AI should never replace human judgment. Judgment is not automatable."
"Reach for more—but be prepared to choose wisely."

Timestamps
00:00 Introduction – Decision-Making, AI & Human Responsibility
03:45 Cassie's Background & Growing Up in South Africa
08:10 Early Fascination with Data, Logic & Systems
12:30 Why Decision-Making Is the Ultimate Life Skill
17:05 Education's Failure to Teach How to Choose Well
22:10 What AI Really Is (And Why It's Not Autonomous)
28:40 Human Judgment vs Machine Intelligence
34:15 The "Unskilled Wishers" Problem Explained
40:10 Ethics, Responsibility & Power at Scale
46:30 AI as a Multiplier of Human Intent
52:20 Personalization, Language & the Future of Work
58:10 Leadership in an AI-Accelerated World
1:03:45 Why Agency Must Stay Human
1:09:20 Reaching for More – Vision, Legacy & Purpose
1:15:10 Final Reflections, Takeaways & Closing

Why You Should Listen
If you're using (or planning to use) AI in business, leadership, or life, this episode will fundamentally shift how you think about responsibility, ethics, and power. Cassie doesn't just explain the future; she equips you to lead it wisely.