Data Poisoning to Hallucinations: The Many Risks of AI Part 1
MAR 9, 2026 · 34 MIN
Description
Data Poisoning to Hallucinations: The Many Risks of AI | Part 1
In this episode of Lunchtime BABLing, Dr. Shea Brown, CEO of BABL AI, is joined by Jeffery Recker for a fast-paced, unscripted deep dive into the real risks behind today’s AI systems.
From data poisoning and model inversion to prompt injection, membership inference, and AI hallucinations, this lightning-round conversation breaks down the security, governance, and reliability challenges organizations must understand before deploying AI at scale.
But this episode doesn’t stop at definitions.
Shea and Jeffery also explore:
- The difference between direct vs. indirect prompt injection
- Whether AI hallucinations can ever truly be “solved”
- Why AI isn’t a truth machine
- Whether we’re using AI the wrong way
- What responsible validation should look like in enterprise AI deployment
As AI systems move from experimentation into real-world decision-making, understanding these risks isn’t optional — it’s foundational.
If you're working in AI governance, assurance, compliance, risk, or deploying AI inside your organization, this conversation will help you think more critically about how these systems actually behave.
🎯 Take the FREE assessment here: https://shea-1mb3pmep.scoreapp.com/
Check out the babl.ai website for more on AI Governance and Responsible AI!