In this episode of Agents of Tech, Stephen Horn and Autria Godfrey explore the rapidly evolving world of Artificial Intelligence and ask the pressing questions: Can we trust AI? Is it safe? AI is becoming deeply embedded in every aspect of our lives, from healthcare to transportation, but how do we ensure it aligns with ethical principles and remains trustworthy?
Featuring insights from:
Dr. Shyam Sundar, Director of the Center for Socially Responsible AI at Penn State, who discusses the role of ethics and trust in AI systems.
Dr. Duncan Eddy, Executive Director of the Stanford Center for AI Safety, who shares lessons from aerospace safety and how they apply to AI.
Join us as we examine the balance between technological advancement and safety, explore the role of regulation, and dive into the psychology of trust in AI. With perspectives on global AI trends, cultural differences in trust, and what the future holds, this is a must-watch for anyone curious about AI's impact on our society.
#ArtificialIntelligence #AISafety #AITrust #EthicalAI #FutureOfAI #MachineLearning #AIFuture #AIInnovation #TechEthics #AgentsOfTech
00:00 - Welcome to Agents of Tech
Stephen Horn and Autria Godfrey introduce the episode, broadcasting from London and Washington, D.C., and pose today’s critical question: Can we trust AI?
02:15 - AI in Our Lives: Benefits and Risks
A discussion on how AI is rapidly transforming industries like healthcare, finance, education, and transportation. But with this integration come concerns about ethics, bias, and safety.
05:30 - Ethical Implications of AI
Exploring the challenges of making AI systems socially accountable and the ethical dilemmas arising from unchecked AI development.
10:00 - Conversation with Dr. Shyam Sundar
Dr. Sundar, Director of the Center for Socially Responsible AI at Penn State, explains how AI’s conversational nature impacts trust and how personalization can lead to both engagement and misplaced trust.
15:45 - Cultural Differences in AI Trust
A fascinating look at how different cultures approach and trust AI systems, highlighting the global nature of AI challenges.
20:00 - Dr. Duncan Eddy on AI Safety Frameworks
Dr. Eddy, Executive Director of the Stanford Center for AI Safety, draws parallels between aerospace safety systems and AI, offering insights into incremental safety improvements and regulation.
25:30 - Can Regulation Keep Up with AI?
A discussion on global efforts like the EU AI Act and the challenges of regulating both AI development and deployment, especially in high-risk applications.
30:15 - How to Verify AI Outputs
Examining methods like adaptive stress testing and formal verification to improve AI reliability and avoid catastrophic errors in fields such as medicine and finance.
35:00 - The Future of AI Safety and Trust
Closing thoughts on how AI safety research is racing to keep up with innovation, the importance of fostering a culture of safety, and ensuring trustworthiness as AI becomes ubiquitous.
38:00 - What’s Next on Agents of Tech?
A sneak peek at the next episode, where the focus will shift to deepfakes and cybersecurity with experts from NYU and the University at Buffalo.