TechSpective Podcast

Tony Bradley


Details

The TechSpective Podcast brings together top minds in cybersecurity, enterprise tech, AI, and beyond to share a unique perspective on technology—unpacking breakthrough trends like zero trust, threat intelligence, AI-enabled security, ransomware’s geopolitical ties, and more. Whether you’re an IT pro, a security exec, or simply tech-curious, each episode blends expert insight with real-world context—from microsegmentation strategies to the human side of cyber ethics. But we also keep it fun, sometimes riffing on pop-culture debates like Star Wars vs. Star Trek or Xbox vs. PlayStation—so it’s not all dry and serious.

Recent Episodes

The Identity Problem No One Saw Coming—Until AI Exposed It
DEC 11, 2025
Every once in a while, a conversation forces you to stop and rethink something you thought you already understood. Recording this latest TechSpective Podcast episode with Semperis CEO Mickey Bresman did exactly that—and it has everything to do with how AI is quietly rewriting the rules of identity security.

If you’ve been following the industry for a while, you know the story: hybrid environments are the norm, identity is the new perimeter, and permissions hygiene is the decades-old chore nobody has enough time—or patience—to do well. None of that is breaking news. What is new is what happens when you drop modern AI into the middle of that reality. We’re not talking about sci-fi leaps or theoretical risk models. We’re talking about something much more immediate: AI tools that can surface old data, forgotten data, and misconfigured access paths you didn’t even know existed. Years of “we’ll fix that later” suddenly become a living, breathing attack surface the moment AI starts connecting dots faster than any human ever could.

Mickey and I unpack why this shift is so significant and why organizations often misunderstand the real implications. We also get into the emerging gray zone of agentic AI—systems that operate like users, make decisions like users, and introduce a whole new category of identity no one had to account for before. It’s an area where the guardrails are still being built, even as the tools accelerate.

I won’t spoil the conversation here, because part of the fun is hearing how Mickey frames the problem—and the opportunities—through the lens of someone working directly with organizations grappling with this right now. Let’s just say the old assumptions don’t hold, and the path forward involves more than bolting AI onto existing processes.

If you care about identity, security, or the rapidly approaching future where AI plays a central role in both offense and defense, this is a conversation worth your time. Check out the full episode. And as always, stay tuned. At the pace things are evolving, this probably won’t be the last time we revisit the topic—and the next wave may hit sooner than any of us expect.
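To make the permissions hygiene problem a bit more concrete, here is a minimal Python sketch of the kind of review that AI can now run at machine speed: flagging accounts that have gone quiet or that sit in overly broad groups. The account data, field names, and thresholds are all hypothetical, and this is an illustration of the idea rather than anything from Semperis.

```python
from datetime import datetime, timedelta, timezone

# Assumptions for illustration only: 180 days without a logon counts as "stale",
# and these group names stand in for whatever "broad access" means in your environment.
STALE_AFTER = timedelta(days=180)
BROAD_GROUPS = {"Domain Admins", "Enterprise Admins"}

# Hypothetical inventory; in practice this would come from a directory export.
ACCOUNTS = [
    {"name": "svc-legacy-report", "last_logon": datetime(2023, 1, 4, tzinfo=timezone.utc),
     "groups": ["Domain Admins"]},
    {"name": "jdoe", "last_logon": datetime(2025, 11, 30, tzinfo=timezone.utc),
     "groups": ["Staff"]},
]

def review(accounts, now=None):
    """Return (account name, reasons) for anything that looks forgotten or over-privileged."""
    now = now or datetime.now(timezone.utc)
    findings = []
    for acct in accounts:
        reasons = []
        if now - acct["last_logon"] > STALE_AFTER:
            reasons.append("no recent logon")
        broad = BROAD_GROUPS.intersection(acct["groups"])
        if broad:
            reasons.append("broad access via " + ", ".join(sorted(broad)))
        if reasons:
            findings.append((acct["name"], reasons))
    return findings

for name, reasons in review(ACCOUNTS):
    print(f"{name}: {'; '.join(reasons)}")
```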
43 MIN
Exploring the Future of Identity Security and Agentic AI
DEC 3, 2025
Every once in a while, I end up in a conversation that hits at exactly the right moment—when the industry is shifting, the vocabulary is changing, and everyone is quietly circling the same questions. This new episode of the TechSpective Podcast is one of those. Art Poghosyan, CEO and co-founder of Britive, joined me for a fluid and surprisingly energizing dive into where identity security meets agentic AI.

If you’ve followed the podcast this year, you know the pattern: gen AI defined the early hype cycle, but 2025 belongs to agents. Not the fantasy version where they automate your whole life, but the real-world scenario where they reshape what “digital responsibility” even means. Art has more than two decades of identity and access management experience, which gives him a grounded way of thinking about the moment we’re in.

As we start talking, the first big theme that emerges is how fast the definition of “identity” is expanding. Identity used to be about people—employees, contractors, admins—and the occasional service account someone documented at 4:59 p.m. on a Friday. Now? Agents complicate all of that. A non-human autonomous system with access to a SaaS platform or a data lake behaves a lot like a user, even if it isn’t one on paper. Treating it as “just software” is exactly how we recreate the same exposures that powered the breach headlines of the last decade.

One of the threads we tug on is the question of trust—not the fuzzy philosophical kind, but trust as an operational decision. An agent making decisions on your behalf needs to be verified every time it touches something sensitive. You need visibility into what it’s doing, controls around how long it can do it, and a way to shut it down when it starts operating outside its lane. These aren’t hypotheticals anymore. They’re the next generation of identity security problems, and Art offers a sharp perspective on what modern tooling needs to look like to keep up.

The conversation also wanders into the human side of this shift. Everyone loves to frame the future as “AI versus AI,” but the real tension right now sits in the messy handoff between human intent and autonomous execution. Most organizations are easing into agents the same way you learn to drive a car: one cautious tap of the brakes at a time. That slow acclimation matters as much as any new feature or model.

And yes, without giving anything away, we do acknowledge the part people sometimes treat like an afterthought: attackers get the same toys. They’re using them already. Ignoring that reality doesn’t make it go away.

What I appreciate about this episode is how it holds the middle ground. It’s not hand-wringing about a dystopian future, and it’s not an AI pep rally. It’s a pragmatic, curious look at a technology that’s maturing faster than the guardrails around it. Art brings a thoughtful, steady view of where identity security is heading and what happens when autonomous systems stop playing by human rules.

If you’re trying to understand how agentic AI fits into your world—or how identity security has to evolve to keep pace—this is a conversation worth hearing. Watch the full episode on YouTube and see where the discussion takes your own thinking next.
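As a rough illustration of trust as an operational decision, the sketch below wires together the three controls mentioned above: verifying every sensitive action an agent takes, putting a hard expiration on its access, and keeping a kill switch within reach. All of the names and actions are invented for the example; it is a pattern sketch, not Britive's product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Everything here is hypothetical; it only illustrates the three controls mentioned
# above: verify each sensitive action, time-limit the grant, and keep a kill switch.

@dataclass
class AgentGrant:
    agent_id: str
    allowed_actions: set
    expires_at: datetime
    revoked: bool = False
    audit_log: list = field(default_factory=list)

    def revoke(self) -> None:
        """Kill switch: stop honoring this agent's access immediately."""
        self.revoked = True

    def authorize(self, action: str) -> bool:
        """Check every sensitive action against scope, expiry, and revocation, and log it."""
        now = datetime.now(timezone.utc)
        allowed = (not self.revoked
                   and now < self.expires_at
                   and action in self.allowed_actions)
        self.audit_log.append((now.isoformat(), action, "allowed" if allowed else "denied"))
        return allowed

grant = AgentGrant(
    agent_id="report-agent-01",                   # hypothetical agent
    allowed_actions={"read:crm", "write:draft"},  # narrow scope
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),  # short-lived
)

print(grant.authorize("read:crm"))         # True: in scope, not expired, not revoked
print(grant.authorize("delete:customer"))  # False: outside its lane
grant.revoke()
print(grant.authorize("read:crm"))         # False: kill switch pulled
```

The point is the shape rather than the code: narrow scope, an expiry, a revocation path, and an audit trail, checked on every call.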
52 MIN
From Polymorphic Attacks to Deepfakes: The Shifting Threat Landscape
NOV 25, 2025
One thing I’ve learned after years of covering cybersecurity is that the “state of the threat landscape” rarely sits still long enough to fit neatly into a headline. Every time you think you’ve understood the latest trend, something shifts under your feet. That’s part of the fun—and part of the challenge.

That dynamic energy is exactly why I invited Brad LaPorte onto the TechSpective Podcast for this latest episode. Brad has lived just about every angle of cybersecurity you can think of: military intelligence, consulting, analyst work at Gartner, and now CMO of Morphisec. He’s been in the room for many of the big transitions—tooling changes, strategic changes, and the increasingly blurry line between human-driven attacks and AI-driven ones.

Our conversation went much deeper than a simple “state of ransomware” update. Ransomware itself has grown so far beyond the old definition that it feels strange to keep calling it that. The classic “encrypt everything and demand crypto” playbook isn’t what defines the modern threat. The real story now is how fast attackers adapt, how quickly new tactics spread, and how criminal groups behave more like full-fledged businesses than hobbyist hackers. We dig into all of that, but in a conversational way rather than a technical lecture.

The thread that kept coming up is how small pieces of data—details that seem harmless on their own—can snowball into serious compromises when attackers start connecting the dots. Brad shared experiences that underscore how those tiny cracks get leveraged in ways most people never consider. It’s a reminder that cybersecurity is not only about the tools in place, but about the environment those tools live in.

Another theme we circled around is the growing presence of AI in both defense and offense. AI-driven attacks aren’t a distant theory anymore. They’re active, adaptive, and often unsettling in how quickly they shift tactics mid-stream. Brad and I talked about what that means for defenders, why “preemptive” approaches are gaining traction, and how companies are trying to outpace threats that no longer behave like traditional malware at all.

We also talked about the human side—something that doesn’t always make it into technical coverage. Cyberattacks aren’t abstract events. They’re personal. They exploit habits, patterns, and moments of distraction. Anyone who has ever clicked something out of instinct rather than scrutiny will relate to some of the scenarios we discuss.

One thing I love about hosting this podcast is the space it creates for unscripted, honest discussion. Brad and I covered a lot—ransomware economics, polymorphic attacks, data exposure, the “funhouse mirror” problem of deception technologies, and even the strange comfort of knowing that pizza orders can still give away national secrets. Yes, really. And no, I’m not explaining it here; you’ll have to listen.

If you work in cybersecurity, follow cybersecurity, or simply exist in a world shaped by cybersecurity, this episode is worth your time. It’s lively, candid, and packed with insight without requiring a glossary on the side. And if past experience is any guide, the things we talk about today may feel very different six months from now. That’s part of why these conversations matter. Give it a listen, subscribe if you enjoy it, and let me know what topics you want to hear explored next.
53 MIN
Why AI Agents Need Guardrails — And Why Everyone’s Talking About It
NOV 20, 2025
The latest episode of the TechSpective Podcast dives straight into one of the most pressing questions in cybersecurity right now: what happens when the vast majority of identities in your environment aren’t human anymore?

I sat down with Danny Brickman, co-founder and CEO of Oasis Security, for a wide-ranging conversation about the future of identity, the rise of agentic AI, and why enterprises may be sprinting into an AI-powered future without realizing just how much risk they’re accumulating along the way. Danny brings a background that blends offensive experience, deep identity expertise, and a pragmatic understanding of what security teams actually need—not just in theory, but in the messy reality of modern cloud environments. We covered a lot of ground. Some of it gets philosophical. Some of it gets unsettling. None of it is boring.

A few themes we talk about (without giving the episode away):

Identity is no longer about people. If you’re still thinking of identity as usernames and passwords, you’re roughly a decade behind. The overwhelming majority of identities in an enterprise belong to machines, services, workloads, keys, tokens—digital “keycards” with no owner attached. And that was before agentic AI entered the picture. (A quick sketch of that idea appears after this summary.)

AI agents behave like employees… just much faster. This creates opportunity. It also creates chaos if you don’t know what your agents can access, what they can do, or how quickly they can do it. The idea of an AI system accidentally wiping out a database is no longer hypothetical.

Access is becoming the currency of the AI era. The value an agent delivers directly correlates to the access it’s granted. That tension—between capability and control—is now central to modern security strategy.

Governance frameworks for AI agents aren’t optional. Danny and his team have been working with industry leaders to build a framework that defines what’s acceptable, what’s risky, and how enterprises can put real guardrails around AI systems. It may be the first time you’ve heard the term “agentic access management,” but it won’t be the last.

We also dig into the AI bubble, the trust problem, and why ‘do your own research’ is becoming less meaningful in an AI-shaped world. These tangents got lively, but they all tie back to a core idea: when machines act on our behalf, we need to understand the implications.

Why this episode matters

AI is reshaping cybersecurity faster than any shift we’ve seen in years. But it’s also blurring lines—between humans and machines, autonomy and oversight, innovation and risk. We don’t try to package neat answers. Instead, we raise the questions every security leader should be asking right now: What should agents be allowed to do? Who’s accountable when something goes wrong? How do we maintain trust in systems that move faster than we can supervise? And what does identity even mean in a world where humans are the minority?

If you want a thoughtful, candid exploration of these issues—and a look at how one company is thinking about securing the future—give the episode a listen. The full episode is now live on the TechSpective Podcast. Let the conversation challenge your assumptions.
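To put a rough number on the first theme, here is a tiny Python sketch that tallies a hypothetical identity inventory by kind and lists the “keycards” with no owner attached. The inventory and field names are made up for illustration; this is not Oasis Security’s tooling or data.

```python
from collections import Counter

# A hypothetical identity inventory; names and fields are invented for the example.
IDENTITIES = [
    {"id": "jdoe",            "kind": "human",   "owner": "jdoe"},
    {"id": "ci-deploy-token", "kind": "machine", "owner": None},
    {"id": "etl-service",     "kind": "machine", "owner": "data-team"},
    {"id": "billing-agent",   "kind": "agent",   "owner": None},
    {"id": "s3-backup-key",   "kind": "machine", "owner": None},
]

by_kind = Counter(identity["kind"] for identity in IDENTITIES)
ownerless = [identity["id"] for identity in IDENTITIES if identity["owner"] is None]

print("identities by kind:", dict(by_kind))
print("non-human share:", f"{1 - by_kind['human'] / len(IDENTITIES):.0%}")
print("keycards with no owner attached:", ownerless)
```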
55 MIN
From Alert Fatigue to Cyber Resilience: Rethinking the Future of the SOC with AI
NOV 7, 2025
Cybersecurity has a long memory—and an even longer list of recurring frustrations. Chief among them: alert fatigue. For as long as security teams have existed, they’ve been drowning in notifications, dashboards, and blinking red lights. Each new platform promises to separate signal from noise, and yet, years later, analysts are still buried under an avalanche of “critical” alerts that turn out to be anything but.

In the latest episode of the TechSpective Podcast, I sat down with Raghu Nandakumara, VP of Industry Strategy at Illumio, to explore why this problem refuses to die—and whether the rise of agentic AI could finally change the equation.

Raghu describes Illumio as a “breach containment company,” focused on limiting the damage when (not if) attackers break through. Their philosophy is simple but powerful: you can’t prevent every intrusion, but you can contain the blast radius. That means reducing lateral movement risk—the ability for attackers to move freely once they’re inside a network—and building what he calls “true cyber resilience.”

But our conversation quickly veered into a broader question about the human side of the SOC (Security Operations Center). Analysts are expected to triage thousands of alerts per day—one every 40 seconds on average. Most are false alarms. A few are genuine threats. The real challenge isn’t visibility; it’s focus. How do you know which alerts matter when every tool is screaming for your attention?

That’s where AI comes in. And not just any AI—the kind that thinks and acts like a teammate. As we discussed, agentic AI represents a shift from passive pattern recognition to autonomous decision support. Instead of merely identifying potential threats, agentic systems can prioritize them, contextualize them, and even recommend (or execute) response actions.

If that sounds like science fiction, it’s not. As Raghu points out, many of the prescriptive tasks assigned to Level 1 SOC analysts—correlating events, escalating cases, and following playbooks—are ideal for automation. An agentic system doesn’t get tired, doesn’t lose focus, and doesn’t fear missing an alert that might end up on the evening news. It simply does the job, at scale, with consistency.

In the episode, we talked about how this approach might reshape the traditional SOC hierarchy. Rather than replacing humans, AI could specialize in specific “personas” that complement human expertise. You might have one agent trained as a first-tier analyst, another tuned to compliance monitoring, and another to executive-level risk analysis. Together, these agents form a collaborative mesh that filters, enriches, and interprets data before it ever hits a human’s desk.

That’s not just a technology upgrade—it’s an operational shift. It redefines how teams think about detection, response, and ultimately resilience. Because resilience isn’t just about blocking attacks or patching vulnerabilities; it’s about ensuring the business continues to function even when something breaks.

What struck me most about our discussion was how seamlessly this connects back to Illumio’s roots in segmentation. For years, the company has helped organizations visualize and contain movement within their environments. Now, by layering intelligent agents into that framework, they’re taking the next logical step: using automation not just to observe risk, but to act on it.

We also talked about how the traditional boundaries between security disciplines—vulnerability management, threat detection, breach simulation—are beginning to blur. In a future shaped by agentic systems, those silos start to dissolve. Tools, agents, and human operators all contribute to a shared understanding of exposure, risk, and response. The result could be a more unified, adaptive form of cybersecurity—one built not on isolated alerts, but on intelligent, contextual awareness.

That’s the promise of agentic AI. It’s not about replacing human judgment; it’s about amplifying it. And as Raghu notes, the sooner organizations embrace that shift, the closer we get to a world where “alert fatigue” is finally a thing of the past.
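For a simplified picture of what a first-tier triage persona described above might look like, here is a short Python sketch that scores incoming alerts, closes the noise, and escalates only what crosses a threshold. The scoring formula, thresholds, and alert fields are assumptions made for the example, not anything Illumio ships.

```python
from dataclasses import dataclass

# A sketch of a first-tier triage "persona", not any vendor's product. The scoring
# formula, thresholds, and alert fields are assumptions made for the example.

@dataclass
class Alert:
    source: str
    severity: int            # 1 (low) to 10 (critical), as reported by the tool
    asset_criticality: int   # 1 to 10, from an asset inventory

def triage(alert: Alert) -> str:
    """A playbook-style Level 1 decision: close, contain and watch, or escalate."""
    score = alert.severity * alert.asset_criticality
    if score >= 60:
        return "escalate to human analyst"
    if score >= 20:
        return "auto-contain and monitor"   # e.g. tighten segmentation around the asset
    return "close with note"

alerts = [
    Alert(source="edr", severity=9, asset_criticality=8),
    Alert(source="waf", severity=3, asset_criticality=9),
    Alert(source="av",  severity=2, asset_criticality=2),
]

for alert in alerts:
    print(alert.source, "->", triage(alert))
```

In a real SOC the score would come from enrichment and correlation rather than a two-field product, but the division of labor is the same: the agent follows the playbook, and the human keeps the judgment calls.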
52 MIN