Future of Threat Intelligence

Team Cymru


Welcome to the Future of Threat Intelligence podcast, where we explore the transformative shift from reactive detection to proactive threat management. Join us as we engage with top cybersecurity leaders and practitioners, uncovering strategies that empower organizations to anticipate and neutralize threats before they strike. Each episode is packed with actionable insights, helping you stay ahead of the curve and prepare for the trends and technologies shaping the future.

Recent Episodes

Trend AI's Robert McArdle on Criminal Business Models Surviving Tech Revolutions
APR 23, 2026
After 18 years tracking cybercriminal operations at Trend AI, Robert McArdle, Director of Cybercrime Research, has developed a framework for predicting how threat actors adopt new technology: the answer consistently comes down to economics, not capability. He breaks down the three rules of thumb his team uses: criminals want an easy life, any new technology must beat the ROI of their current model, and cybercrime is evolutionary rather than revolutionary. Those rules explain why ransomware has actually slowed the adoption of new attack methods, and why the lowering technical barrier for attackers creates an asymmetric burden on defenders, who must demonstrate value to an employer rather than simply turn a profit.

Robert goes deep on where agentic AI is headed for both offense and defense, including a sobering implication for law enforcement: as criminal operations become increasingly automated, arresting the principals may no longer disrupt the business. His team has already put this to work on the defensive side; their internal agentic system, ACER, has discovered 210 zero-days in a matter of months.
He also raises a specific concern that practitioners should take seriously: CTI reports containing detailed reverse-engineering write-ups and code samples are essentially training data for malicious LLM prompting, and the industry should reconsider what level of technical detail is actually necessary to publish alongside IOCs.

Topics discussed:
- The three-rule framework for predicting criminal adoption of emerging technology
- How the lowering technical barrier to entry shifts the entire cybercriminal bell curve upward
- Why embedding AI directly into malware remains rare (below 1% of observed cases), and the two structural reasons that limit adoption
- The shift toward jailbreaking non-Western LLMs as criminal operators anticipate that law enforcement coordination is effectively nonexistent
- How agentic AI transforms criminal business models from linear service stacks to exponentially scalable operations
- The emerging law enforcement challenge: when operations are ~75% autonomous, arrests no longer constitute meaningful disruption
- Why CTI publishing norms need to evolve, specifically how detailed code samples and reverse-engineering screenshots in APT reports can be fed directly into LLMs to accelerate malware development
- Practical defensive posture for shadow AI proliferation: treat AI-powered tools as untrusted software under existing vulnerability management frameworks

Key Takeaways:
- When assessing whether adversaries will adopt a new technique or tool, evaluate it through three lenses: ease of operation, return on investment versus current methods, and evolutionary fit with existing business models.
- Before publishing detailed reverse-engineering write-ups, code samples, or pseudocode in APT reports, assess whether that level of detail serves defender use cases or primarily serves as a development accelerant for threat actors.
- Audit your organization's shadow AI exposure as a software risk problem, not an AI problem.
- Structure specialist agents to handle discrete tasks rather than relying on a single broad LLM.
- Pressure-test your law enforcement response playbook against autonomous criminal infrastructure.
- Evaluate your AI security tooling for hallucination risk in detection workflows.
- Model romance scams and investment fraud at scale in your threat landscape.
- Monitor for jailbroken non-Western LLM wrappers in criminal marketplaces.
- Factor defender tooling complexity into hiring and onboarding benchmarks.
- Track zero-day discovery velocity as a benchmark for agentic security ROI.

Listen to more episodes: Apple · Spotify · YouTube · Website
40 MIN
Scott Scher on Why CTI Teams Forecast Instead of Predict
APR 9, 2026
Scott Scher, Cyber Threat Intelligence Lead, makes a distinction that reframes how intel teams should think about their own value: they are forecasters, not predictors. That shift in framing has concrete consequences for how CTI programs justify themselves internally, and Scott argues that the most meaningful metric isn't alert volume or report count, but the decisions intel has actually influenced.

Scott also addresses where he sees the threat landscape heading, and his read on ransomware cuts against how many teams are still oriented. He argues that encryption-focused ransomware has largely peaked in value for attackers; the real shift is toward pure data exfiltration. He also offers a grounded take on AI in CTI: it's useful for accelerating manual analyst tasks like data gathering and link analysis, but only if intelligence teams define how it gets used before the organization does it for them.

Topics discussed:
- Why CTI teams operate in the forecasting space rather than the prediction space
- The practical implications for how assessments are communicated to stakeholders and leadership
- The challenge of quantifying CTI value through decision-driven metrics rather than output volume
- Mapping each stakeholder's workflow outputs and the triggers that drive them, then injecting intelligence at the right point in that chain
- The evolution of ransomware toward exfiltration-only models, and why this reframes the defensive priority from backup to data loss prevention
- How CTI teams can use strategic intelligence to drive organizational decisions on edge device hardening and third-party risk
- The role of AI in intel workflows as a force multiplier for manual analyst tasks, and why teams need to define that use case proactively
- The collective defense model emerging at the state and local government level
- Why making analytic assessments scientifically defensible is what separates credible CTI from noise

Key Takeaways:
- Reframe your team's value proposition around decisions influenced, not products delivered.
- Map each stakeholder's workflow before defining your intelligence requirements.
- Conduct monthly stakeholder cadences specifically to capture feedback on delivered products.
- Ask stakeholders about their biggest obstacles, not just their intel requirements.
- Reorient ransomware defensive priorities toward data loss prevention.
- Use sustained trend analysis to build strategic intelligence cases for resource allocation.
- Get ahead of how AI is used in your CTI workflows before organizational pressure defines it for you.
- Treat qualitative stakeholder feedback as a scientific input, not an afterthought.
- Document the reasoning behind every intelligence assessment, not just the conclusion.
- Pursue an interdisciplinary lens when building CTI programs and hiring.
45 MIN
You Can't Trust Your Zoom Call Anymore. Deepfakes, DPRK & the New Attack Surface
MAR 26, 2026
Deepfakes have moved well past the uncanny valley and into active threat operations, and Tom Cross, Head of Threat Research at GetReal, has the client-side case studies to back it up. Tom explains how North Korean IT worker infiltration campaigns have transformed HR and video conferencing from administrative functions into active attack surface, albeit one that most security teams aren't monitoring, logging, or ingesting into their SIEM.Drawing on a long-running collaboration with a former West Point professor and intelligence officer, Tom also applies the military framework of tactical, operational, and strategic intelligence to cybersecurity, arguing that most CTI programs are really just lists of burned indicators. The actual value of IOCs, he contends, is retrospective: discovering you were communicating with a known-bad actor means you may still be compromised. He makes the case for connecting adversary intent models, red team findings, and vulnerability data into a unified predictive picture. 
Topics discussed:
- How North Korean IT worker infiltration has converted HR processes and video conferencing into an active, unmonitored attack surface
- Voice-cloned peer impersonation via messaging apps, followed by deepfaked video calls and malware delivery
- Why deepfake audio attacks on IT help desk credential reset processes are among the most likely near-term vectors
- Biometric indicators of compromise and the significant false-positive risks that distinguish them from traditional IP or domain IOCs
- How the military intelligence framework of tactical, operational, and strategic analysis applies to CTI programs
- The strategic importance of retrospective IOC analysis versus forward-looking ingestion
- Why DPRK's financial motivation model expands their target set far beyond what traditional nation-state threat modeling would predict

Key Takeaways:
- Ingest video conferencing logs into your SIEM.
- Audit your remote credential reset process for social engineering resistance.
- Map red team findings and vulnerability data to specific adversary profiles rather than treating them as a generic remediation backlog.
- Implement retrospective IOC analysis alongside forward-looking blocking.
- Treat DPRK's financial motivation as an equalizer when assessing APT exposure.
- Build threat intelligence at the strategic layer by modeling adversary intent and objectives, not just cataloging observed TTPs.
- Apply extra care to biometric IOC sharing.
- Monitor employee working-hour patterns against claimed time zones as a behavioral indicator of potential employment fraud.
- Extend the IOC taxonomy to include multimedia and biometric formats.
42 MIN
Two Minds. One Reframe. A Shift That Won't Wait.
MAR 19, 2026
Vincent Passaro, Engineering Manager at Stripe Security, didn't get there through a slide deck or a company mandate. He got there through a shower thought that followed a conversation with a friend, one that broke how he'd been thinking about building, leading, and even measuring his own team.

The reframe was simple. It didn't start with "we're all going to be software developers" but with "we're going to be product owners." That single pivot changed everything downstream, including how he approached prototyping, how he set success criteria for agents, and how he coached his team out of chasing bugs and into defining outcomes.

In this episode, Will and Vince trace both of their "pin drop" moments, the specific conversations that shifted their mental models, then try to articulate what that shift actually means for CTI analysts and security engineers working real problems today.

They talk about what it felt like to stop asking "how do I wire this" and start asking "what does success look like," and how fast things moved once that happened. They're honest about what breaks, like the siloed tools that don't talk to each other, the governance vacuum that opens when every analyst is shipping products, and the dopamine trap of adding features instead of finishing work. And they're equally direct about what becomes possible when outcome velocity, not headcount or tooling budget, becomes the competitive edge.

This isn't a conversation about AI hype. It's about what happens when two practitioners who've spent years operating the plumbing realize the plumbing has been commoditized, and what that means for where human judgment actually matters now.

If you've been waiting for the right moment to pay attention, this is probably the episode where you stop waiting.

Topics discussed:
- "Product owner" vs. "developer" mindset and why it changes how analysts build tooling
- Defining outcome criteria upfront as the core discipline for AI-assisted development
- How AI collapses experimentation costs and eliminates dev team dependency
- Analyst-owned toolkits and outcome velocity as a competitive edge for small teams
- The governance risk: product silos, duplicated tooling, and inconsistent standards
- FT3 as an open-source framework built to lower the community contribution barrier
- Why CISO/board resistance to AI on security grounds will backfire
- Threat actors are scaling the same way, and analyst adaptation is the necessary response

Key Takeaways:
- The unlock isn't learning to code: it's learning to think backwards from the outcome. Define what success looks like, set the criteria the agent has to meet before it moves on, and stop micromanaging the implementation. That's the product owner shift.
- Slow down before you build. Spend more time in planning than in execution, using deep research across multiple models, comparing outputs, and stress-testing the concept before a single line gets written.
- Drop the subscription and treat the model like a teacher, not a tool. Start with a problem you already understand and ask it to walk you from zero to fluent; it will push you to stop thinking like a developer and start thinking like a product owner.
- If you have a backlog of problems you gave up on because they weren't staffable, go find them. The feasibility question that used to take months to answer now takes an afternoon. Start there.
- Before your next team planning cycle, map what everyone is building. The duplicate tools are already being written in parallel by people who don't know about each other. Get ahead of it now, because it only compounds.
- If you're involved in open-source threat intel frameworks, the contribution problem was never motivation; it was friction. The tooling gap is closable. Build the on-ramp and the community will use it.
42 MIN
TIG Risk Services' Duaine Labno on How Remote Hiring Became an Opening for Infiltration
MAR 12, 2026
What happens when a DPRK IT worker operation lands inside one of your clients, and the three-letter agency you call says they can't show up? Duaine Labno, Director of Special Investigations & Threat Intelligence at TIG Risk Services, walks through exactly that case: his team built a ruse to recover the compromised laptop, staged a physical handoff at corporate HQ, filmed the courier, ran his plates, and traced him to multiple properties. This produced the kind of ground-level intelligence the FBI told him they'd never seen before in a US-based DPRK case.

Duaine explains why digital and physical investigations have to run in parallel from day one rather than being handed off sequentially, and what that looks like operationally when federal resources don't materialize. He also breaks down how speed-optimized, post-COVID remote hiring processes gave adversaries a repeatable entry point, and why an untrained recruiter doing a soft document check is now a meaningful attack surface for corporate networks.

Topics discussed:
- How post-COVID remote hiring processes relaxed identity verification standards and created repeatable enterprise network entry points
- Running digital and physical investigations in parallel when tracking identity fraud and insider threats
- Using open-source intelligence and proprietary threat monitoring software to scan millions of data points for suspect behavioral patterns
- Executing a live DPRK IT worker case using physical surveillance, a document ruse, and plate runs to identify a U.S.-based operator
- Why untrained recruiters conducting soft document checks have become a meaningful attack surface in corporate hiring pipelines
- How adversaries are weaponizing AI for voice alteration, deepfakes, and document manipulation to bypass hiring and KYC verification processes
- The case for vetted, secure cross-industry intelligence sharing platforms to close the gaps that individual organizational silos leave open
- Where cyber threat intelligence trails end and physical investigation must pick up to produce actionable, court-ready evidence

Key Takeaways:
- Treat remote hiring pipelines as an active attack surface by pulling security, legal, and HR into the process.
- Train recruiters to recognize fraudulent identity documents as a first line of defense against adversarial infiltration of corporate networks.
- Run digital and physical investigations in parallel from the start rather than waiting for cyber analysis to conclude.
- Build contingency plans for federal non-response into any investigation involving foreign threat actors.
- Deploy threat monitoring software capable of scanning open-source data at scale to surface behavioral patterns and connections.
- Establish vetted, secure intelligence sharing relationships with peer organizations and law enforcement to close visibility gaps.
- Pressure-test AI-assisted hiring tools against deepfake and voice alteration scenarios before deploying them.
30 MIN