Where is AI headed? A conversation with a philosopher and an economist
NOV 24, 2025 · 47 MIN
Description
<p><em>Please </em><strong><em>like</em></strong><em>, </em><strong><em>share</em></strong><em>, </em><strong><em>comment</em></strong><em>, and </em><strong><em>subscribe</em></strong><em>. It helps grow the newsletter and podcast without a financial contribution on your part. Anything is very much appreciated. And thank you, as always, for reading and listening.</em></p><p><strong>About the Author</strong></p><p><em>Jimmy Alfonso Licon is a philosophy professor at </em><strong><em>Arizona State University</em></strong><em> working on ignorance, ethics, cooperation, and God. Before that, he taught at the </em><strong><em>University of Maryland</em></strong><em>, </em><strong><em>Georgetown</em></strong><em>, and </em><strong><em>Towson University</em></strong><em>. He loves classic rock, Western movies, and combat sports. He lives with his wife, a prosecutor, and family at the foot of the Superstition Mountains. He also abides.</em></p><p>I had anxieties about AI and the future. So I decided to sit down with <a target="_blank" href="https://substack.com/profile/35728647-cyril-hedoin"><strong>Cyril Hédoin</strong></a> of <a target="_blank" href="https://cyrilhedoin.substack.com/"><strong>The Archimedean Point</strong></a> to hash out our thoughts together. </p><p>Talking with Cyril, I kept coming back to two linked worries: <strong><em>displacement and disempowerment.</em></strong> He traced his path from institutional economics into philosophy and admitted the same professional anxiety: AI doing more and more of the work we once took to <em>be distinctly human</em>. Neither of us thinks anyone can predict the labor-market fallout; the historical record makes such forecasts laughable. But he’s right that whoever owns the AI infrastructure will hold enormous economic power, and that is a shift worth taking seriously.</p><p>Cyril’s worry about “uniformization” struck me. 
If people increasingly rely on broadly similar models for writing, thinking, and making decisions, <em>the range of genuine variation shrinks</em>. Because these systems are trained to be agreeable, even sycophantic, we risk reinforcing the worst aspects of our epistemic bubbles.</p><p>We ended on the personal terrain: loneliness, synthetic intimacy, and the temptation to treat AI as a partner or companion. I don’t think this becomes the norm soon, but the cultural pressures are obvious. <em>It feels like relational junk food—immediately gratifying, ultimately hollow.</em> Yet there is a genuinely hopeful angle too. If used well, AI might revive a kind of <a target="_blank" href="https://jimmyalfonsolicon.substack.com/p/synthetic-socrates-teaching-assistant"><strong>synthetic Socratic method</strong></a>—an always-on dialectical partner that sharpens arguments rather than dulls them. The real question is whether we use the tool without quietly surrendering ourselves to it.</p> <br/><br/>Get full access to Uncommon Wisdom at <a href="https://jimmyalfonsolicon.substack.com/subscribe?utm_medium=podcast&utm_campaign=CTA_4">jimmyalfonsolicon.substack.com/subscribe</a>