#554 Securing the AI Era: Alex Schlager on Why AI Agents Are the New Attack Surface
DEC 16, 2025 · 45 MIN
Description
<p>In this episode of <em>The CTO Show with Mehmet</em>, I’m joined by <strong>Alex Schlager</strong>, Founder and CEO of <strong>AIceberg</strong>, a company operating at the intersection of AI, cybersecurity, and explainability.</p><p><br></p><p>We dive deep into why <strong>AI agents fundamentally change enterprise risk</strong>, how shadow AI is spreading across organizations, and why monitoring black-box models with other black boxes is a dangerous mistake.</p><p><br></p><p>Alex explains how explainable machine learning can provide the observability, safety, and security enterprises desperately need as they adopt agentic AI at scale.</p><p><br></p><p>⸻</p><p><br></p><p><strong>👤 About the Guest</strong></p><p><br></p><p><strong>Alex Schlager</strong> is the Founder and CEO of <strong>AIceberg</strong>, a company focused on detection and response for AI-powered workflows, from LLM-based chatbots to complex multi-agent systems.</p><p><br></p><p>AIceberg’s mission is to secure enterprise AI adoption using <strong>fully explainable machine learning models</strong>, avoiding black-box-on-black-box monitoring approaches. 
Alex has deep expertise in AI explainability, agentic systems, and enterprise AI risk management.</p><p><br></p><p><a href="https://www.linkedin.com/in/alexschlager/">https://www.linkedin.com/in/alexschlager/</a></p><p><br></p><p>⸻</p><p><br></p><p><strong>🧠 Key Topics We Cover</strong></p><p> • Why AI agents create a new and expanding attack surface</p><p> • The rise of shadow AI across business functions</p><p> • Safety vs. security in AI systems, and why CISOs must now care about both</p><p> • How agentic AI amplifies risk through autonomy and tool access</p><p> • Explainable AI vs. LLM-based guardrails</p><p> • Observability challenges in agent-based workflows</p><p> • Why traditional cybersecurity tools fall short in the AI era</p><p> • Governance, risk, and compliance for AI-driven systems</p><p> • The future role of AI agents inside security teams</p><p><br></p><p>⸻</p><p><br></p><p><strong>📌 Episode Highlights & Timestamps</strong></p><p><br></p><p><br></p><p><strong>00:00</strong> – Introduction and welcome</p><p><strong>01:05</strong> – Alex Schlager’s background and the founding of AIceberg</p><p><strong>02:20</strong> – Why AI-powered workflows need new security models</p><p><strong>03:45</strong> – The danger of monitoring black boxes with black boxes</p><p><strong>05:10</strong> – Shadow AI and the loss of enterprise visibility</p><p><strong>07:30</strong> – Safety vs. security in AI systems</p><p><strong>09:15</strong> – Real-world AI risks: hallucinations, data leaks, toxic outputs</p><p><strong>12:40</strong> – Why agentic AI massively expands the attack surface</p><p><strong>15:05</strong> – Privilege, identity, and agents acting on behalf of users</p><p><strong>18:00</strong> – How AIceberg provides observability and control</p><p><strong>21:30</strong> – Securing APIs, tools, and agent execution paths</p><p><strong>24:10</strong> – Data leakage, DLP, and public LLM usage</p><p><strong>27:20</strong> – Governance challenges for CISOs and 
enterprises</p><p><strong>30:15</strong> – AI adoption vs security trade-offs inside organizations</p><p><strong>33:40</strong> – Why observability is the first step to AI security</p><p><strong>36:10</strong> – The future of AI agents in cybersecurity teams</p><p><strong>40:30</strong> – Final thoughts and where to learn more</p><p><br></p><p>⸻</p><p><br></p><p><strong>🎯 What You’ll Learn</strong></p><p> • How AI agents differ from traditional software from a security perspective</p><p> • Why explainability is becoming critical for AI governance</p><p> • How enterprises can regain visibility over AI usage</p><p> • What CISOs should prioritize as agentic AI adoption accelerates</p><p> • Where AI security is heading in 2026 and beyond</p><p><br></p><p>⸻</p><p><br></p><p><strong>🔗 Resources Mentioned</strong></p><p> • <strong>AIceberg</strong>: <a href="https://aiceberg.ai">https://aiceberg.ai</a></p><p> • <strong>AIceberg Podcast – How Hard Can It Be? </strong><a href="https://howhardcanitbe.ai/"><strong>https://howhardcanitbe.ai/</strong></a></p>