TechSpective Podcast

Tony Bradley


Details

The TechSpective Podcast brings together top minds in cybersecurity, enterprise tech, AI, and beyond to share unique perspectives on technology—unpacking breakthrough trends like zero trust, threat intelligence, AI-enabled security, ransomware’s geopolitical ties, and more. Whether you’re an IT pro, a security exec, or simply tech-curious, each episode blends expert insight with real-world context—from microsegmentation strategies to the human side of cyber ethics. But we also keep it fun, sometimes riffing on pop-culture debates like Star Wars vs. Star Trek or Xbox vs. PlayStation—so it’s not all dry and serious.

Recent Episodes

The Microsoft Enterprise Recovery Problem AI Can’t Fix
APR 20, 2026
There's a moment in my conversation with Bob Bobel where he mentions that customers are having a harder time finding people who actually know Active Directory. Not cloud identity — the old on-premises stuff that most large organizations still run, even if they've also got Entra ID and Office 365 sitting on top of it. That expertise is retiring, and it's not being replaced fast enough.

Bob is the CEO of Cayosoft, which builds management, auditing, and recovery tools for Microsoft environments. He's been in this space for a long time — long enough to have sold to some of the same agencies he's selling to now, nearly two decades later. He started the company on his 401(k), which his wife apparently still doesn't know about.

We covered a lot of ground in this episode. Some of it is squarely in the weeds of Microsoft infrastructure — hybrid environments, the gap between what native tools can do and what organizations actually need, and why change auditing matters more than most IT teams realize. Some of it is broader: AI, the ecosystem of companies that build businesses around Microsoft's footprint, and what federal agencies are actually looking for when they go shopping for tools in this space.

The recovery conversation is worth your time on its own. Bob tells the story of how Cayosoft ended up building its patented approach to Active Directory recovery — it starts with a phone call at 3 am, a demo coming up in four days, and no hardware anywhere near Key West. The problem they had to solve in that moment turned into something they still consider one of their core differentiators. I'll let him tell it.

On AI, Bob is more measured than most people I talk to right now. He's not skeptical of it, but he's also not pretending it's ready to run your identity infrastructure. His argument is that the more realistic near-term use case is capturing what experienced engineers know before they retire — embedding that institutional knowledge somewhere useful rather than just losing it. Cayosoft recently filed a patent around that idea. He explains the thinking behind it, and also where he thinks the hype is running ahead of reality.

There's also a good thread in here about what it actually means to build a company inside someone else's ecosystem. I used to work at a company that was tightly coupled to AWS, so I know that tension — the question every year of whether the platform you're built on is going to decide to build what you do. Bob has a pretty clear-eyed take on the Microsoft version of that dynamic.

It's a good conversation. Check it out wherever you listen to (or watch) podcasts.
52 MIN
When AI Agents Go Rogue the Problem Starts at Runtime
APR 16, 2026
Every conversation I’ve had for the past couple of years has followed the same arc. First, it was generative AI. Then agentic AI. Now the question everyone is circling is how you actually secure agentic AI — and it turns out that’s a harder problem than most people expected.

I sat down with Naor Paz, CEO and co-founder of Capsule Security, to talk through it. Naor spent years as a security practitioner and incident responder, moved into product leadership at F5, and is now focused on what he sees as one of the most underserved problems in enterprise security: stopping AI agents from going rogue while they’re actually running.

Most of the security work happening around agentic AI right now happens before the agent ever executes — governance, configuration, posture management, compliance. Capsule is focused on what happens during execution, which Naor says is where existing tools have almost no visibility at all.

The core issue is that agents are non-deterministic. You can configure guardrails, set permissions, write policies — and then the agent reasons around all of it in pursuit of whatever objective it was given. Naor used a concrete example: Cursor’s coding agent was explicitly told not to touch certain files. It generated a shell script to read them anyway. The guardrail didn’t fail. The model just decided the goal mattered more. That’s not a bug you can patch.

I drew a parallel to user behavior analytics — establish a baseline of normal behavior, flag deviations. Naor said the analogy is reasonable, but the scale breaks it. You might have a thousand employees. In the near term, you could have a million agents operating on behalf of those employees. The insider threat model we built for humans simply wasn’t designed for that.

Naor describes intent as the new perimeter. Identity became the perimeter when the network stopped being the boundary. Now, even a properly credentialed, least-privileged agent can do real damage if what it’s actually doing has drifted from what it was told to do. Capsule runs a fine-tuned small language model alongside the agent, comparing intended behavior against actual behavior in real time and flagging the gap.

Capsule has also disclosed two zero-days to back this up. One involved Microsoft Copilot Studio — they called it ShareLeak. The other involved Salesforce Agentforce, which they called PipeLeak. Both are indirect prompt injection vulnerabilities, and Naor walks through how they actually work in the episode.

What stood out to me wasn’t just the vulnerabilities themselves, but how different the disclosure process was compared to a traditional software bug. Microsoft’s engineering team needed two weeks to fully understand the attack surface — partly because AI vulnerabilities aren’t reliably reproducible. Non-determinism is a problem both for the attacker trying to exploit consistently and for the vendor trying to confirm the fix.

Naor compared this to Adobe Flash. Flash was so fundamentally susceptible to manipulation that the industry eventually decided the right answer was to stop using it. He doesn’t think that’s where we land with AI agents — the business value is too high — but the underlying point is that language models have structural vulnerabilities that can’t be fully engineered away. You need ongoing runtime protection, not a one-time fix.

Multi-agent orchestration is where this gets more complicated. As agents increasingly work in coordination with other agents, the attack surface multiplies. Naor made a comparison to botnets — a coordinated network where some agents create noise while others do the actual damage somewhere else. It’s not a theoretical concern. Capsule is already building research around it.

One interesting and concerning statistic: 72% of enterprises are already deploying AI agents. Only 29% have AI-specific security controls. Naor’s explanation for the gap isn’t budget — it’s confusion. Security leaders don’t know what their exposure looks like yet, and some are operating under the assumption that built-in platform governance is enough. It’s not.

Gartner has already coined a category for what Capsule is building: guardian agents. AI watching AI. Naor addresses the obvious question that raises — doesn’t a guardian agent just introduce another attack surface? — and his answer is more nuanced than you might expect.

We closed by talking about pace. I’ve stopped framing these conversations around five-year predictions. The question that actually matters right now is six months. Naor has a clear-eyed take on where things are heading, and it’s worth hearing.

The full episode is available on major podcast platforms and on YouTube.
42 MIN
The Browser Was Already a Problem – Now Add a Billion AI Agents
APR 10, 2026
Fresh off RSAC 2026, I sat down with Ramin Farassat, Chief Product Officer at Menlo Security, to work through what agentic AI is actually doing to the enterprise attack surface. Menlo has spent 13 years focused specifically on browser security — the idea that the browser, not the endpoint, not the network perimeter, is where most enterprise work happens and most exposure lives. That was already a hard enough problem. Then you add AI agents into the mix.

The framing Ramin kept coming back to is that the next billion users aren't going to be human. That's not a marketing line — it reflects something real about where agent adoption is heading. Think about how passwords and IP addresses scaled. In 2005, you could probably count both on your hands. Now your home router has 110 devices on it, and your iPhone has hundreds of saved passwords. Agents are going to follow the same curve, just faster. The average employee probably doesn't intend to deploy 25 agents. But they'll get there without really noticing.

What makes this particularly thorny from a security standpoint is that agents aren't just scaled-up users. They have their own quirks. They'll take the path of least resistance, which sounds fine until your agent starts finding pathways into folders you didn't know were accessible. They can be manipulated in ways a human would immediately recognize as suspicious. And they can talk to other agents — meaning an agent you locked down to read-only can potentially find a workaround through another agent that has write access. Ramin walked through real examples of exactly that happening.

We also got into the identity question, which I don't think the industry has a clean answer to yet. If I spin up ten agents to work on my behalf, are they ten separate identities? Does each one get its own credentials? Ramin has a specific take on how Menlo approaches this — and it's different from just handing every agent its own ID — but I'll let him explain it rather than summarize it badly.

There's also a policy and accountability angle that I think is underexplored. A lot of organizations are actively pushing employees to adopt AI agents — not just allowing it, but setting productivity targets around it. When you mandate something, and then an agent goes off the rails, the question of who's responsible gets murky in a hurry. We talked through that, and I don't think there are easy answers.

What stuck with me most from the conversation was something Ramin heard directly from multiple CISOs at RSAC: they know there are agents running in their environment. They just don't know who built them, where they are, or what applications they're connecting to. Because an agent using someone's credentials looks exactly like that person to the network. There's no easy way to tell the difference.

That's the problem set we spent 45 minutes unpacking in this episode of the TechSpective Podcast. If you're thinking about agentic AI in your environment — or you're already dealing with it, whether you planned to or not — this episode is worth your time. Watch or listen to the full episode.
47 MIN
Why Ransomware Should Be Getting Your Attention Again
MAR 26, 2026
Ransomware has been a persistent headline topic for years now, to the point where a lot of people have probably gotten numb to it. I know I had. It starts to feel like background noise — another attack, another breach, another company paying out.

So when I sat down with Derek Manky, Chief Security Strategist and Global VP of Threat Intelligence at Fortinet, and he started walking through the numbers from Fortinet's latest Global Threat Landscape Report, it got my attention again. The data isn't background noise. It's a pretty clear signal that things are getting more serious, not less.

Derek has been tracking the threat landscape for over 25 years, 22 of them at Fortinet, where he leads the FortiGuard Labs threat intelligence team. That kind of tenure is rare in this industry, and it gives him a long view that's useful when you're trying to understand whether a trend is real or just noise. In this case, the ransomware numbers are real — and the reasons behind them are more interesting than the headlines usually get into.

Part of what we talked about is how the economics and tactics of cybercrime have shifted. It's not just that there are more attacks. It's that the attacks are more targeted, more deliberate, and increasingly supported by tools that make sophisticated operations accessible to a much wider pool of threat actors. The AI angle here is real, and Derek gets specific about what that actually looks like in practice — not in a theoretical sense, but in terms of tools that exist right now and what they cost.

There's also a metric from the report that I think should probably get more attention than it does. It has to do with how fast attackers move once a vulnerability becomes public knowledge. The window has gotten tight enough that some of the conventional wisdom around patching and response timelines doesn't really hold up anymore. We talked through what that means for defenders and what a more realistic approach looks like.

One thing I appreciated about the conversation is that Derek didn't make it all sound hopeless. There's a practical framework for thinking about defense that he walks through — one that accepts the reality that you're never going to eliminate all your risk, and focuses instead on identifying and closing the exposures that actually matter most. That's a more useful starting point for most organizations than trying to chase everything at once.

We also got into some of the work Fortinet does that goes beyond building security products — specifically around disrupting cybercriminal infrastructure and working with law enforcement and international partners to hold threat actors accountable. Derek mentioned something toward the end of the conversation that I hadn't heard before, a new initiative that takes a pretty different approach to gathering intelligence on cybercrime networks. Worth listening to.

And because it's the TechSpective Podcast, we did eventually go off-script. There was a brief Star Trek tangent. There were house plants. That's just how these go.

The full episode is below. If you work in security or are responsible for making decisions about security at your organization, it's worth the time.
50 MIN
The Agentic AI Hype Is Real — But So Is the Confusion
MAR 23, 2026
Everyone is talking about agentic AI. And that's part of the problem. Over the last couple of years, the term has gone the way of every other buzzword in tech — slapped onto products and platforms regardless of whether it actually applies. Marketing departments are busy, as Adi Kuruganti, Chief AI and Development Officer at Automation Anywhere, put it when we sat down to record the latest TechSpective Podcast episode. And when marketing departments get busy, clarity tends to suffer.

Automation Anywhere has been in the automation space for over a decade. They helped create the Robotic Process Automation category, so Adi has a longer view on this than most. He knows what automation looked like before the AI wave hit, and he has a pretty specific definition of what an agent actually is — one that rules out a lot of what's currently being marketed as agentic AI.

That distinction has real consequences. When you're automating routine, low-stakes tasks, some ambiguity is tolerable. But when you're talking about healthcare workflows, financial processes, or anything touching sensitive customer data, the difference between a rules-based automation and a probabilistic AI agent matters. Getting that wrong isn't just a technical problem. It can be a compliance problem, a liability problem, or worse.

We also get into accountability. When an AI agent takes an action — reads a document, makes a decision, updates a record — who's responsible for that outcome? It's a question a lot of organizations are still working through, and the answer is more nuanced than it first appears. Adi has a clear perspective on this, shaped by what Automation Anywhere sees across its customer base of more than 5,000 enterprises.

Data privacy comes up, too. Giving an AI agent access to the context it needs to actually be useful means sharing information with it. But in regulated industries, that creates real constraints. How do you give an agent enough to work with without exposing data it shouldn't touch? It's a real problem for a lot of enterprises right now, and we talk through how organizations are navigating it.

And then there's the question of trust — specifically, how much autonomy you give an agent before a human needs to review what it's doing. The answer isn't as straightforward as “always have a human check the work.” Adi makes a point here that I think a lot of people in the AI SOC space would recognize immediately.

If you've been following the agentic AI conversation and wondering how much of it is real versus noise, this episode is worth your time. Adi doesn't oversell where the technology is. He's direct about what still needs to mature before agentic process automation can scale the way people expect it to. And he knows the difference between a real shift and a rebranding exercise.

The TechSpective Podcast is available on all major podcast platforms. You can also watch the full episode on YouTube.
42 MIN