Can We Trust AI? Safety, Ethics, and the Future of Technology
NOV 27, 2024 · 40 MIN
Description
In this episode of Agents of Tech, Stephen Horn and Autria Godfrey explore the rapidly evolving world of Artificial Intelligence and ask the pressing questions: Can we trust AI? Is it safe? AI is becoming deeply embedded in every aspect of our lives, from healthcare to transportation, but how do we ensure it aligns with ethical principles and remains trustworthy?

Featuring insights from:
Dr. Shyam Sundar, Director of the Center for Socially Responsible AI at Penn State, who discusses the role of ethics and trust in AI systems.
Dr. Duncan Eddy, Executive Director of the Stanford Center for AI Safety, who shares lessons from aerospace safety and how they apply to AI.

Join us as we examine the balance between technological advancement and safety, explore the role of regulation, and dive into the psychology of trust in AI. With perspectives on global AI trends, cultural differences in trust, and what the future holds, this is a must-watch for anyone curious about AI's impact on our society.
#ArtificialIntelligence #AISafety #AITrust #EthicalAI #FutureOfAI #MachineLearning #AIFuture #AIInnovation #TechEthics #AgentsOfTech

00:00 - Welcome to Agents of Tech
Stephen Horn and Autria Godfrey introduce the episode, broadcasting from London and Washington, D.C., and pose today's critical question: Can we trust AI?

02:15 - AI in Our Lives: Benefits and Risks
A discussion of how AI is rapidly transforming industries like healthcare, finance, education, and transportation, and the concerns about ethics, bias, and safety that come with this integration.

05:30 - Ethical Implications of AI
Exploring the challenges of making AI systems socially accountable and the ethical dilemmas arising from unchecked AI development.

10:00 - Conversation with Dr. Shyam Sundar
Dr. Sundar, Director of the Center for Socially Responsible AI at Penn State, explains how AI's conversational nature affects trust and how personalization can lead to both engagement and misplaced trust.

15:45 - Cultural Differences in AI Trust
A look at how different cultures approach and trust AI systems, highlighting the global nature of AI challenges.

20:00 - Dr. Duncan Eddy on AI Safety Frameworks
Dr. Eddy, Executive Director of the Stanford Center for AI Safety, draws parallels between aerospace safety systems and AI, offering insights into incremental safety improvements and regulation.

25:30 - Can Regulation Keep Up with AI?
A discussion of global efforts like the EU AI Act and the challenges of regulating both AI development and deployment, especially in high-risk applications.

30:15 - How to Verify AI Outputs
Examining methods like adaptive stress testing and formal verification to improve AI reliability and avoid catastrophic errors in fields such as medicine and finance.

35:00 - The Future of AI Safety and Trust
Closing thoughts on how AI safety research is racing to keep up with innovation, the importance of fostering a culture of safety, and ensuring trustworthiness as AI becomes ubiquitous.

38:00 - What's Next on Agents of Tech?
A sneak peek at the next episode, which shifts the focus to deepfakes and cybersecurity with experts from NYU and the University at Buffalo.