TechSpective Podcast

Tony Bradley


Details

The TechSpective Podcast brings together top minds in cybersecurity, enterprise tech, AI, and beyond to share unique perspectives on technology—unpacking breakthrough trends like zero trust, threat intelligence, AI-enabled security, ransomware’s geopolitical ties, and more. Whether you’re an IT pro, a security exec, or simply tech‑curious, each episode blends expert insight with real-world context—from microsegmentation strategies to the human side of cyber ethics. But we also keep it fun, sometimes riffing on pop‑culture debates like Star Wars vs. Star Trek or Xbox vs. PlayStation—so it’s not all dry and serious.

Recent Episodes

Who Do You Trust Online—And Why?
FEB 19, 2026
Trust on the internet used to be a fairly simple calculation. You looked for familiar names, recognizable brands, maybe a blue checkmark, and you made a judgment call. Today, that math often fails. AI has changed the game. Deepfakes are convincing. Entire personas can be spun up in minutes. Fraud doesn’t look sloppy anymore—it looks professional. And in many cases, it looks exactly like the people and platforms we already rely on.

That’s the backdrop for my latest episode of the TechSpective Podcast, where I sat down with Oscar Rodriguez, who leads product efforts around trust at LinkedIn. The conversation quickly moved past features and announcements and into a much bigger question: how do we decide who to trust online when it’s getting harder to tell what’s real?

LinkedIn has become my primary social platform over the past few years—partly by default, partly by design. As other platforms drifted further into chaos, LinkedIn positioned itself as the place where professional identity still mattered. But even there, the ground is shifting. The platform is more social than it used to be. The conversations are broader. And the risks are higher.

In this episode, we dig into that evolution—not just how LinkedIn has changed, but why it’s changing and what that means for the people using it every day. We talk about professionalism as a concept, how it’s expanded beyond résumés and job postings, and why trying to rigidly police what “belongs” on a professional platform misses the point. At the same time, we don’t ignore the downside of that openness.

One of the recurring themes in our conversation is signal versus noise. When you’re interacting with people you don’t know—often several degrees removed from your own network—what clues do you rely on to decide whether someone is legitimate? Mutual connections? Profile history? Gut instinct? Verification badges? Those signals matter more than ever, and not just on LinkedIn.

As Oscar explains, trust has become a portable problem. We’re constantly being asked to prove who we are, where we work, or whether we belong—often across dozens of platforms that don’t talk to each other. That friction creates opportunity for abuse, but it also forces a conversation about how trust should work at internet scale.

We also get into how AI is accelerating the arms race. The same tools that make it easier to create content and connect at scale also make it easier to deceive. Fraudsters don’t need to sound unprofessional anymore. Bots don’t look like bots. And “doing your own research” is a lot harder when expertise itself can be convincingly faked.

Rather than offering simple answers, this episode focuses on the trade-offs. How much friction is acceptable in the name of safety? What does verification actually prove—and what doesn’t it prove? Should trust be assessed once, or continuously? And who ultimately bears responsibility when things go wrong: the platform, the user, or both?

Listen to or watch the full episode of the TechSpective Podcast with Oscar Rodriguez to hear the whole conversation.
49 MIN
Why Identity Is the Key to AI-Driven Defense
JAN 30, 2026
If you’ve been following trends in cybersecurity and enterprise tech, you already know that AI has become more than a buzzword—it’s a foundational shift. What may surprise you, though, is just how central identity has become in that evolution.

In the latest episode of the TechSpective Podcast, I had the chance to speak with Naresh Persaud, Principal at Deloitte, who has spent more than two decades working in identity and cybersecurity. Today, he leads Deloitte’s Cyber AI Blueprint initiative—an effort aimed at reimagining cybersecurity from the ground up using AI.

Our conversation explores why identity—something many people still think of as basic authentication—is now arguably the most critical pillar of AI-enabled cybersecurity. We dig into how identity data can enhance threat detection, simplify operations, and serve as the connective tissue across traditionally siloed cyber disciplines.

And while we’ve all heard about identity’s role in credential theft and privilege abuse, Naresh takes it further—explaining how identity intersects with the very architecture of agentic AI systems. Spoiler: It’s not really about humans. The world of non-human identities—workloads, bots, agentic systems—has grown exponentially. That shift creates enormous opportunity but also opens up a wide new attack surface that most organizations aren’t yet equipped to secure.

One of the key themes in this episode is context. Naresh emphasizes that identity provides context in a way no other signal can. Behavioral anomalies, access patterns, and workload telemetry are far more meaningful when filtered through the lens of identity. That’s especially important when adversaries increasingly rely on valid credentials to carry out attacks. In a world where everything looks like an insider threat, context is king.

We also talk about where traditional security approaches fall short—and how cognitive cybersecurity changes the game. From simplifying the security stack to enabling faster, smarter decisions, AI (when paired with identity) is already showing promise in SOC operations and incident response.

If that sounds a bit abstract, don’t worry—Naresh brings clarity with real-world examples and tangible insights. He connects the dots between AI, identity, and cyber maturity in a way that’s refreshingly grounded. Whether you’re a CISO, an identity architect, or just someone trying to stay ahead of the curve, there’s something in this conversation for you.

One thing’s clear: AI is forcing us to rethink cybersecurity assumptions we’ve held for decades. And identity is no longer a sidekick in that story—it’s a strategic anchor.

Check out the full episode wherever you get your podcasts—or watch the video version on YouTube. You’ll walk away with a deeper understanding of why identity matters more than ever—and how to position your organization for what comes next.
53 MIN
Zero Trust, Real Talk: A Conversation with Dr. Chase Cunningham
JAN 21, 2026
How do you know your cybersecurity investments are actually making you safer? That’s the question at the heart of the latest TechSpective Podcast episode, where Dr. Chase Cunningham—better known to many as “Dr. Zero Trust”—joins me for an unfiltered, candid conversation about the state of modern cybersecurity. And no, this isn’t a puff piece on policy frameworks or the latest silver bullet tool.

If you’ve read Chase’s recent LinkedIn post “Misaligned Zero Trust Spend = 1999 Firewall FOMO, But Worse,” you already know where this is going: straight into the hard truths about how organizations are still getting Zero Trust fundamentally wrong. In his post, Chase makes a blunt observation that became the foundation for our discussion: too many companies treat Zero Trust like a shopping list—buying products instead of outcomes. “If your ‘Zero Trust’ line items don’t move incident frequency, blast radius, or time to contain, you’re not buying security—you’re buying feelings.” That line stood out to me and was part of why I reached out to invite Chase to join me on the podcast.

No Silver Bullets, Just Smarter Questions

This isn’t an episode full of buzzwords or vendor shout-outs. It’s a reminder that there’s no shortcut around the work. Whether we’re talking about identity-anchored access control, microsegmentation, or reducing dwell time through automation, Chase repeatedly returns to a central theme: strategy over spectacle. He compares some security spending habits to crash diets and “cyber fat pills”—quick fixes that sound great in a pitch deck but collapse under scrutiny. Just like with fitness, real security gains come from consistency, not gimmicks.

We also explore the often-overlooked relationship between breach economics and stock price behavior—another area where Chase has done deep research. The myth that a breach will destroy a brand? It’s more complicated than that. Sometimes (pro tip: most of the time) the dip is a buying opportunity, not a death sentence.

Why You Should Listen

If you’re a CISO, security architect, board member—or just someone trying to make sense of your security stack—this conversation will challenge your assumptions in all the right ways. It’s part therapy session, part strategy clinic, and entirely grounded in real-world experience. Check out the full episode:
38 MIN
Algorithms, Thought Leadership, and the Future of Digital Influence
DEC 31, 2025
It’s getting harder to have a “normal” conversation about content, social media, or visibility—mostly because the rules keep changing while you’re still mid-sentence. Just a few years ago, you could create a blog post, optimize it for SEO, promote it on Twitter (back when it was still Twitter and not a dumpster fire of right-wing conspiracy lunacy rebranded as X), and expect a decent number of eyeballs to land on it. That’s not the game anymore.

Now we’re living in a world of algorithmic gatekeeping, AI-generated content slop, and platforms that are slowly morphing into echo chambers of their own making. And as someone who spends a lot of time thinking, writing, and talking about tech, marketing, and cybersecurity, I wanted to have an actual conversation about what this means—beyond the usual recycled talking points. So, I invited Evan Kirstel onto the TechSpective Podcast to dig in.

If you’re not familiar with Evan, you should be. He’s one of the more influential voices in B2B tech media—part content creator, part live streamer, part analyst, part TV host, depending on the day. He’s also been doing this for a while, and more importantly, doing it well. That makes him a great sounding board for the increasingly murky topic of digital thought leadership.

One of the first things we talked about was the rise of formulaic, AI-generated content. You know the kind—it reads like it was built from a checklist of “engagement best practices,” and while it may technically be “on brand,” it’s rarely interesting. The irony, of course, is that the platforms boosting this kind of content are simultaneously rewarding quantity over quality, while drowning users in sameness.

From there, we explored how visibility really works in 2025. Hint: it’s no longer about who you know—it’s about which large language model knows you. If you’re not showing up in ChatGPT summaries or Google’s new generative answers, you’re basically invisible to a big chunk of your potential audience.

Which raises the question: how do you actually earn mindshare in a world where traditional SEO has been replaced by AI synthesis? We didn’t land on a one-size-fits-all answer—but we did agree on a few things. First, content that sounds like content for content’s sake? It’s dead. Thought leadership that merely echoes what 20 other people are already saying? Also dead. What works now is originality, consistency, and credibility—backed by actual lived experience.

Another key theme we unpacked: platforms. Everyone likes to say “meet your audience where they are,” but it’s harder than it sounds when the audience is splintered across LinkedIn, Reddit, YouTube, TikTok, and a dozen other niche platforms—each with its own expectations and formats. Evan shared how he tailors his content for each platform without diluting the message, and why companies that try to be “cool” without context usually fall flat.

I’ll also say this—this episode reminded me that high-quality conversations are still one of the most underutilized forms of content out there. When it’s not scripted or polished within an inch of its life, a good conversation can cut through the noise and resonate on a level most polished op-eds or templated videos never will.

So if you’re feeling stuck, wondering why your content isn’t landing like it used to, or trying to figure out how to show up where it matters—this episode is worth your time. Check out my conversation with Evan Kirstel on the TechSpective Podcast. And yes, we get into Gary Vaynerchuk, TikTok, zero-click search, and why it might be time to completely rethink your content strategy.
46 MIN
Shadow AI, Cybersecurity, and the Evolving Threat Landscape
DEC 28, 2025
The cybersecurity landscape never sits still—and neither do the conversations I aim to have on the TechSpective Podcast. In the latest episode, I sit down with Etay Maor, Chief Security Strategist at Cato Networks and a founding member of Cato CTRL, the company’s cyber threats research lab. Etay brings a rare mix of technical depth and practical perspective—something increasingly necessary as we navigate the murky waters of modern cyber threats.

This time, the conversation centers on the rise of Shadow AI—a topic gaining urgency but still underappreciated in many organizations. If Shadow IT was the quiet rule-breaker of the past decade, Shadow AI is its unpredictable, algorithmically supercharged cousin. It’s showing up in boardrooms, workflows, and marketing departments—often without security teams even knowing it’s there.

Here’s the thing: banning AI tools or blocking access doesn’t work. People find a way around it. We’ve seen this play out with cloud storage, collaboration tools, and other “unsanctioned” technologies. The same logic applies here. Etay and I explore why organizations need to move beyond a binary yes/no mindset and instead think in terms of guardrails, visibility, and enablement.

We also get into the tension between innovation and risk—how fear-based decision-making can put companies at a disadvantage, and why the bigger threat might be not using AI at all. That may sound counterintuitive coming from two people steeped in cybersecurity, but context matters. The risk of falling behind could be greater than the risk of exposure—if companies don’t take a strategic approach.

Naturally, the conversation expands into how threat actors are adapting AI for offensive purposes—crafting more convincing phishing emails, automating reconnaissance, and even gaming defensive AI tools. Etay shares sharp insights into how attackers use our own tools against us and what that means for the future of cybersecurity.

There’s also a philosophical thread woven throughout—questions about whether AI can truly be “original,” how human creativity intersects with machine learning, and what kind of ethical or regulatory frameworks might be needed (if any) to keep things from going off the rails. Etay brings both technical fluency and historical perspective to the discussion, making it a conversation that’s as grounded as it is thought-provoking.

This episode doesn’t veer into fear-mongering or hype. It stays real—examining where we are, where we’re headed, and how to make better decisions as the ground keeps shifting. Whether you’re in security, tech leadership, policy, or just curious about how AI is reshaping the digital battleground, this one’s worth your time.

Tune in to the latest TechSpective Podcast—now streaming on all major platforms. Share your thoughts in the comments below.
58 MIN