<p><a href="https://www.linkedin.com/in/ryan-glynn/" target="_blank" rel="ugc noopener noreferrer">Ryan Glynn</a>, Staff Security Engineer at <a href="https://www.compass.com/" target="_blank" rel="ugc noopener noreferrer">Compass</a>, shares a practical AI implementation strategy for security operations. His team built machine learning models that removed 95% of the on-call burden of phishing triage by combining traditional ML techniques with LLM-powered semantic understanding.</p><p>He also explores where AI agents excel versus where deterministic approaches still win, why tuning detection rules beats prompt-engineering agents, and how to build company-specific models that solve your actual security problems rather than chasing vendor promises about autonomous SOCs.</p><p><strong>Topics discussed:</strong></p><ul><li>Language models excel at documentation and at semantic understanding of log data for security analysis</li><li>Using LLMs to create binary feature flags for machine learning models enables more flexible detection engineering</li><li>Agentic SOC platforms sometimes claim to analyze data they aren&#39;t actually querying</li><li>Tuning detection rules directly proves more reliable than prompt-engineering agent analysis behavior</li><li>Intent classification in email workflows helps automate triage of forwarded and reported phishing attempts</li><li>Custom ML models targeting company-specific burdens can cut analyst workload by 95% for those problems</li><li>The cost of context gathering in security makes efficiency critical when deploying AI agents across diverse data sources</li><li>Alert tagging systems with simple binary classifications enable better feedback loops for AI-assisted detection tuning</li><li>Query language differences across SIEM platforms create challenges for general-purpose LLM code generation</li><li>Explainable machine learning models remain essential for security decisions requiring human oversight and accountability</li></ul><p><strong>Listen to more episodes: </strong></p><p><a href="https://podcasts.apple.com/us/podcast/detection-at-scale/id1582584270" target="_blank" rel="ugc noopener noreferrer">Apple</a></p><p><a href="https://open.spotify.com/show/6xa9t5dty4eH0UXDQXIew9?si=1df5eac89b294b14" target="_blank" rel="ugc noopener noreferrer">Spotify</a></p><p><a href="https://youtube.com/playlist?list=PLjYWlPBgNuD4f-hPjTyq3iPC-nT64ckFr&amp;feature=shared" target="_blank" rel="ugc noopener noreferrer">YouTube</a></p><p><a href="https://panther.com/resources/podcasts" target="_blank" rel="ugc noopener noreferrer">Website</a></p>

Detection at Scale

Panther Labs

Compass' Ryan Glynn on Why LLMs Shouldn't Make Security Decisions — But Should Power Them

JAN 27, 2026 · 41 MIN