TechSpective Podcast

Tony Bradley

The TechSpective Podcast brings together top minds in cybersecurity, enterprise tech, AI, and beyond to share unique perspectives on technology—unpacking breakthrough trends like zero trust, threat intelligence, AI-enabled security, ransomware’s geopolitical ties, and more. Whether you’re an IT pro, security exec, or simply tech‑curious, each episode blends expert insight with real-world context—from microsegmentation strategies to the human side of cyber ethics. But we also keep it fun, sometimes riffing on pop‑culture debates like Star Wars vs. Star Trek or Xbox vs. PlayStation—so it’s not all dry and serious.

  1. 5D AGO

    Who Do You Trust Online—And Why?

    Trust on the internet used to be a fairly simple calculation. You looked for familiar names, recognizable brands, maybe a blue checkmark, and you made a judgment call. Today, that math often fails. AI has changed the game. Deepfakes are convincing. Entire personas can be spun up in minutes. Fraud doesn’t look sloppy anymore—it looks professional. And in many cases, it looks exactly like the people and platforms we already rely on.

    That’s the backdrop for my latest episode of the TechSpective Podcast, where I sat down with Oscar Rodriguez, who leads product efforts around trust at LinkedIn. The conversation quickly moved past features and announcements and into a much bigger question: how do we decide who to trust online when it’s getting harder to tell what’s real?

    LinkedIn has become my primary social platform over the past few years—partly by default, partly by design. As other platforms drifted further into chaos, LinkedIn positioned itself as the place where professional identity still mattered. But even there, the ground is shifting. The platform is more social than it used to be. The conversations are broader. And the risks are higher.

    In this episode, we dig into that evolution—not just how LinkedIn has changed, but why it’s changing and what that means for the people using it every day. We talk about professionalism as a concept, how it’s expanded beyond résumés and job postings, and why trying to rigidly police what “belongs” on a professional platform misses the point. At the same time, we don’t ignore the downside of that openness.

    One of the recurring themes in our conversation is signal versus noise. When you’re interacting with people you don’t know—often several degrees removed from your own network—what clues do you rely on to decide whether someone is legitimate? Mutual connections? Profile history? Gut instinct? Verification badges? Those signals matter more than ever, and not just on LinkedIn.

    As Oscar explains, trust has become a portable problem. We’re constantly being asked to prove who we are, where we work, or whether we belong—often across dozens of platforms that don’t talk to each other. That friction creates opportunity for abuse, but it also forces a conversation about how trust should work at internet scale.

    We also get into how AI is accelerating the arms race. The same tools that make it easier to create content and connect at scale also make it easier to deceive. Fraudsters don’t need to sound unprofessional anymore. Bots don’t look like bots. And “doing your own research” is a lot harder when expertise itself can be convincingly faked.

    Rather than offering simple answers, this episode focuses on the trade-offs. How much friction is acceptable in the name of safety? What does verification actually prove—and what doesn’t it prove? Should trust be assessed once, or continuously? And who ultimately bears responsibility when things go wrong: the platform, the user, or both?

    Listen to or watch the full episode of the TechSpective Podcast with Oscar Rodriguez to hear the whole conversation.

    50 min
  2. JAN 30

    Why Identity Is the Key to AI-Driven Defense

    If you’ve been following trends in cybersecurity and enterprise tech, you already know that AI has become more than a buzzword—it’s a foundational shift. What may surprise you, though, is just how central identity has become in that evolution.

    In the latest episode of the TechSpective Podcast, I had the chance to speak with Naresh Persaud, Principal at Deloitte, who has spent more than two decades working in identity and cybersecurity. Today, he leads Deloitte’s Cyber AI Blueprint initiative—an effort aimed at reimagining cybersecurity from the ground up using AI.

    Our conversation explores why identity—something many people still think of as basic authentication—is now arguably the most critical pillar of AI-enabled cybersecurity. We dig into how identity data can enhance threat detection, simplify operations, and serve as the connective tissue across traditionally siloed cyber disciplines.

    And while we’ve all heard about identity’s role in credential theft and privilege abuse, Naresh takes it further—explaining how identity intersects with the very architecture of agentic AI systems. Spoiler: it’s not really about humans. The world of non-human identities—workloads, bots, agentic systems—has grown exponentially. That shift creates enormous opportunity but also opens up a wide new attack surface that most organizations aren’t yet equipped to secure.

    One of the key themes in this episode is context. Naresh emphasizes that identity provides context in a way no other signal can. Behavioral anomalies, access patterns, and workload telemetry are far more meaningful when filtered through the lens of identity. That’s especially important when adversaries increasingly rely on valid credentials to carry out attacks. In a world where everything looks like an insider threat, context is king.

    We also talk about where traditional security approaches fall short—and how cognitive cybersecurity changes the game. From simplifying the security stack to enabling faster, smarter decisions, AI (when paired with identity) is already showing promise in SOC operations and incident response.

    If that sounds a bit abstract, don’t worry—Naresh brings clarity with real-world examples and tangible insights. He connects the dots between AI, identity, and cyber maturity in a way that’s refreshingly grounded. Whether you’re a CISO, an identity architect, or just someone trying to stay ahead of the curve, there’s something in this conversation for you.

    One thing’s clear: AI is forcing us to rethink cybersecurity assumptions we’ve held for decades. And identity is no longer a sidekick in that story—it’s a strategic anchor.

    Check out the full episode wherever you get your podcasts—or watch the video version on YouTube. You’ll walk away with a deeper understanding of why identity matters more than ever—and how to position your organization for what comes next.

    54 min
  3. JAN 21

    Zero Trust, Real Talk: A Conversation with Dr. Chase Cunningham

    How do you know your cybersecurity investments are actually making you safer? That’s the question at the heart of the latest TechSpective Podcast episode, where Dr. Chase Cunningham—better known to many as “Dr. Zero Trust”—joins me for an unfiltered, candid conversation about the state of modern cybersecurity.

    And no, this isn’t a puff piece on policy frameworks or the latest silver bullet tool. If you’ve read Chase’s recent LinkedIn post “Misaligned Zero Trust Spend = 1999 Firewall FOMO, But Worse,” you already know where this is going: straight into the hard truths about how organizations are still getting Zero Trust fundamentally wrong.

    In his post, Chase makes a blunt observation that became the foundation for our discussion: too many companies treat Zero Trust like a shopping list—buying products instead of outcomes. “If your ‘Zero Trust’ line items don’t move incident frequency, blast radius, or time to contain, you’re not buying security—you’re buying feelings.” That line stuck with me, and it’s a big part of why I invited Chase to join me on the podcast.

    No Silver Bullets, Just Smarter Questions

    This isn’t an episode full of buzzwords or vendor shout-outs. It’s a reminder that there’s no shortcut around the work. Whether we’re talking about identity-anchored access control, microsegmentation, or reducing dwell time through automation, Chase repeatedly returns to a central theme: strategy over spectacle.

    He compares some security spending habits to crash diets and “cyber fat pills”—quick fixes that sound great in a pitch deck but collapse under scrutiny. Just like with fitness, real security gains come from consistency, not gimmicks.

    We also explore the often-overlooked relationship between breach economics and stock price behavior—another area where Chase has done deep research. The myth that a breach will destroy a brand? It’s more complicated than that. Sometimes (pro tip: most of the time) the dip is a buying opportunity, not a death sentence.

    Why You Should Listen

    If you’re a CISO, security architect, board member—or just someone trying to make sense of your security stack—this conversation will challenge your assumptions in all the right ways. It’s part therapy session, part strategy clinic, and entirely grounded in real-world experience.

    Check out the full episode:

    38 min
  4. 12/31/2025

    Algorithms, Thought Leadership, and the Future of Digital Influence

    It’s getting harder to have a “normal” conversation about content, social media, or visibility anymore—mostly because the rules keep changing while you're still mid-sentence. Just a few years ago, you could create a blog post, optimize it for SEO, promote it on Twitter (back when it was still Twitter and not a dumpster fire of right-wing conspiracy lunacy rebranded as X), and expect a decent number of eyeballs to land on it. That’s not the game anymore.

    Now we’re living in a world of algorithmic gatekeeping, AI-generated content slop, and platforms that are slowly morphing into echo chambers of their own making. And as someone who spends a lot of time thinking, writing, and talking about tech, marketing, and cybersecurity, I wanted to have an actual conversation about what this means—beyond the usual recycled talking points. So, I invited Evan Kirstel onto the TechSpective Podcast to dig in.

    If you’re not familiar with Evan, you should be. He’s one of the more influential voices in B2B tech media—part content creator, part live streamer, part analyst, part TV host, depending on the day. He’s also been doing this for a while, and more importantly, doing it well. That makes him a great sounding board for the increasingly murky topic of digital thought leadership.

    One of the first things we talked about was the rise of formulaic, AI-generated content. You know the kind—it reads like it was built from a checklist of “engagement best practices,” and while it may technically be “on brand,” it’s rarely interesting. The irony, of course, is that the platforms boosting this kind of content are simultaneously rewarding quantity over quality, while drowning users in sameness.

    From there, we explored how visibility really works in 2025. Hint: it’s no longer about who you know—it’s about which large language model knows you. If you’re not showing up in ChatGPT summaries or Google’s new generative answers, you’re basically invisible to a big chunk of your potential audience. Which raises the question: how do you actually earn mindshare in a world where traditional SEO has been replaced by AI synthesis?

    We didn’t land on a one-size-fits-all answer—but we did agree on a few things. First, content that sounds like content for content’s sake? It’s dead. Thought leadership that merely echoes what 20 other people are already saying? Also dead. What works now is originality, consistency, and credibility—backed by actual lived experience.

    Another key theme we unpacked: platforms. Everyone likes to say “meet your audience where they are,” but it’s harder than it sounds when the audience is splintered across LinkedIn, Reddit, YouTube, TikTok, and a dozen other niche platforms—each with its own expectations and formats. Evan shared how he tailors his content for each platform without diluting the message, and why companies that try to be “cool” without context usually fall flat.

    I’ll also say this—this episode reminded me that high-quality conversations are still one of the most underutilized forms of content out there. When it’s not scripted or polished within an inch of its life, a good conversation can cut through the noise and resonate on a level most polished op-eds or templated videos never will.

    So if you’re feeling stuck, wondering why your content isn’t landing like it used to, or trying to figure out how to show up where it matters—this episode is worth your time. Check out my conversation with Evan Kirstel on the TechSpective Podcast. And yes, we get into Gary Vaynerchuk, TikTok, zero-click search, and why it might be time to completely rethink your content strategy.

    47 min
  5. 12/28/2025

    Shadow AI, Cybersecurity, and the Evolving Threat Landscape

    The cybersecurity landscape never sits still—and neither do the conversations I aim to have on the TechSpective Podcast. In the latest episode, I sit down with Etay Maor, Chief Security Strategist at Cato Networks and a founding member of Cato CTRL, the company’s cyber threats research lab. Etay brings a rare mix of technical depth and practical perspective—something increasingly necessary as we navigate the murky waters of modern cyber threats.

    This time, the conversation centers on the rise of Shadow AI—a topic gaining urgency but still underappreciated in many organizations. If Shadow IT was the quiet rule-breaker of the past decade, Shadow AI is its unpredictable, algorithmically supercharged cousin. It’s showing up in boardrooms, workflows, and marketing departments—often without security teams even knowing it’s there.

    Here’s the thing: banning AI tools or blocking access doesn’t work. People find a way around it. We’ve seen this play out with cloud storage, collaboration tools, and other “unsanctioned” technologies. The same logic applies here. Etay and I explore why organizations need to move beyond a binary yes/no mindset and instead think in terms of guardrails, visibility, and enablement.

    We also get into the tension between innovation and risk—how fear-based decision-making can put companies at a disadvantage, and why the bigger threat might be not using AI at all. That may sound counterintuitive coming from two people steeped in cybersecurity, but context matters. The risk of falling behind could be greater than the risk of exposure—if companies don’t take a strategic approach.

    Naturally, the conversation expands into how threat actors are adapting AI for offensive purposes—crafting more convincing phishing emails, automating reconnaissance, and even gaming defensive AI tools. Etay shares sharp insights into how attackers use our own tools against us and what that means for the future of cybersecurity.

    There’s also a philosophical thread woven throughout—questions about whether AI can truly be “original,” how human creativity intersects with machine learning, and what kind of ethical or regulatory frameworks might be needed (if any) to keep things from going off the rails. Etay brings both technical fluency and historical perspective to the discussion, making it a conversation that’s as grounded as it is thought-provoking.

    This episode doesn’t veer into fear-mongering or hype. It stays real—examining where we are, where we’re headed, and how to make better decisions as the ground keeps shifting. Whether you’re in security, tech leadership, policy, or just curious about how AI is reshaping the digital battleground, this one’s worth your time.

    Tune in to the latest TechSpective Podcast—now streaming on all major platforms. Share your thoughts in the comments below.

    58 min
  6. 12/23/2025

    Agentic AI and the Art of Asking Better Questions

    I’ve had a lot of conversations about AI over the past couple of years—some insightful, some overhyped, and a few that left me questioning whether we’re even talking about the same technology. But every now and then, I get the opportunity to sit down with someone who not only understands the technology but also sees its broader implications with clarity and honesty. This episode of the TechSpective Podcast is one of those moments.

    Jeetu Patel, President and Chief Product Officer at Cisco, joins me for an unscripted, unfiltered conversation that covers more ground than I could have outlined in a set of pre-written questions. (I did draft questions, actually. We just never used them.)

    Jeetu and I have known each other for a while, and this episode reflects the kind of conversation you only get with someone who’s deeply immersed in both the strategic and human sides of tech. It’s thoughtful. It’s philosophical. And it doesn’t pull punches.

    At the center of our discussion is the concept of “agentic AI”—a term that’s being used more frequently, sometimes without much clarity. We unpack what it actually means, what it can realistically do, and how it differs from the wave of chatbots and content generators that came before it. More importantly, we talk about how these AI agents might change not just the tasks we automate, but how we think about work itself.

    Of course, with any conversation about AI and the future of work comes the inevitable tension: what gets lost, what gets reimagined, and what still requires distinctly human judgment. Jeetu brings a nuanced take to this, rooted in his experience leading product innovation at one of the world’s largest tech companies. It’s not a conversation filled with predictions so much as it is a reframing of the questions we should be asking.

    What stood out to me is how quickly we normalize the extraordinary. A technology that felt magical two years ago is now embedded in our daily workflows. That speed of adoption changes the stakes. It means we need to be more deliberate—not just about what AI can do, but what we want it to do, and what we risk offloading too quickly.

    We also touch on the philosophical implications. If AI agents really can handle more of the cognitive heavy lifting, what’s our role in the loop? Do we become editors? Overseers? Explorers of new frontiers? And how do we prepare for jobs that don’t exist yet, using tools that are evolving faster than we can document them?

    I think this episode will resonate with anyone trying to navigate this moment—whether you’re in product development, policy, marketing, or just someone who likes to think a few moves ahead. It’s about more than AI. It’s about how we adapt, how we define value, and what we choose to hold onto as the landscape shifts.

    Give it a listen. And as always, I’d love to hear your thoughts.

    53 min
  7. 12/19/2025

    Building Security for a World That’s Already Changed

    There’s a question I’ve been sitting with lately: Are we prepared for what AI is about to expose in our organizations—not just technically, but operationally?

    In this episode of the TechSpective Podcast, I sit down with Kavitha Mariappan, Rubrik’s Chief Transformation Officer, to unpack some of the less flashy but arguably more urgent questions about enterprise security, AI readiness, and business continuity. If your organization is still treating identity as a login issue or AI as a future-state conversation, you might be missing the bigger picture.

    Kavitha doesn’t speak in clichés. She’s been in the trenches—engineering, scaling go-to-market teams, and now helping steer one of the fastest-evolving players in the data security space. Her perspective is shaped by decades of experience, but her focus is very much on the now: how to operationalize resilience at a time when every system, process, and even person has become a potential attack vector.

    One of the threads we pull on is the idea that resilience isn’t a fallback plan anymore—it’s the front line. And identity? That’s not just a security issue. It’s a dependency. If you can’t log in, you can’t recover. You can’t operate. You can’t pivot. The conversation touches on what it really means to build for resilience in a landscape where downtime isn’t just costly—it’s existential.

    We also explore what I’ll loosely call “AI exposure therapy”—not in the sense of experimenting with new models or shiny tools, but in understanding how AI is forcing companies to confront their structural weaknesses. What used to be considered internal inefficiencies are now potential vectors of attack. Technical debt isn’t just a performance issue—it’s a risk multiplier.

    Kavitha brings data to the table too—sharing insight from Rubrik Zero Labs on the alarming surge in identity-based attacks and why the majority of companies are still playing catch-up when it comes to securing what they can’t always see. It’s a wake-up call, but not a hopeless one.

    What made this conversation stand out to me wasn’t just the subject matter, but the way Kavitha frames the questions we should be asking: How do we architect for a world that’s already in flux? How do we define AI transformation when most businesses are still digesting digital transformation? And perhaps most critically, what needs to change inside the organization before the tech can even do its job?

    I won’t give away the full arc of the discussion, but here’s my pitch: If you’re leading, advising, or building for a company that handles sensitive data (hint: that’s all of us), this episode will challenge you to think differently about where resilience really begins—and what it’s going to take to build it into the DNA of your org.

    Listen to or watch the full episode here:

    53 min
  8. 12/18/2025

    Cybersecurity’s Quiet Revolution: What We’re Missing While Chasing the Hype

    There’s something happening in cybersecurity right now that’s both exciting and a little disorienting. As generative and agentic AI take over headlines, conference keynotes, and investor decks, it’s easy to assume we’re on the verge of some great leap forward. The reality is more complicated—and more interesting.

    In the latest episode of the TechSpective Podcast, I had the chance to sit down with Sachin Jade, Chief Product Officer at Cyware, for a conversation that cuts through the buzzwords. We cover a lot of ground—from AI’s place in the SOC to the underrated power of relevance in threat intelligence—but what stuck with me most was this: the most transformative work happening in security right now doesn’t look like a revolution. It looks like simplification.

    Not simplification in the marketing sense—fewer dashboards, “single pane of glass,” etc.—but simplification where it actually matters: filtering noise, streamlining analysis, helping human analysts do their jobs better and faster. There’s a growing recognition among smart security leaders that “flashy” features might demo well, but if they don’t reduce burnout, improve signal-to-noise, or give analysts time back in their day, they’re missing the point.

    We’re at a moment where AI can—and should—do more than just surface alerts. The goal isn’t to impress anyone with a cool interface or to simulate a brilliant security expert. The goal is to embed intelligence into the places that grind analysts down: filtering irrelevant threat intel, connecting disparate data points, recommending next steps based on context. Mundane, unsexy tasks—yes. But transformative when done well.

    Sachin offered a useful framework for thinking about agentic AI that goes beyond the surface definitions most people are using. We talk about where true decision-making autonomy begins, how it fits into layered workflows, and what it really looks like to “mimic” human reasoning in a SOC environment. Spoiler: it’s not about replacing people. It’s about enabling them.

    Another theme that emerged: relevancy. Not in a vague, feel-good way, but in the deeply practical sense of “does this matter to me, my company, my infrastructure, right now?” For all the AI talk, too many tools still struggle to answer that question clearly. Cyware’s approach, which Sachin outlines in the episode, puts a premium on reducing noise and increasing clarity. There’s no magic wand—but there is a very intentional shift toward making intelligence actionable, digestible, and contextual. That matters more than whatever buzzword is trending on social media this week.

    We also explore the idea of functional decomposition in AI—a concept that mirrors how most human security teams are structured. Instead of building a monolithic super-intelligent assistant, Cyware has developed a multi-agent model where each AI agent is focused on a specific task, like malware triage or incident correlation. It’s less hive-mind, more specialized team—just like the best human teams. That architectural choice has significant implications for accuracy, explainability, and trust.

    The full conversation dives deeper into how these ideas show up in real-world security operations, what CISOs are actually looking for in AI-driven tools, and why strategic use of “boring” automation may be the real game-changer for the next decade.

    If you’re someone who’s tired of the AI hype but still deeply curious about where it’s actually moving the needle, I think you’ll find this episode worth your time. We don’t spend 45 minutes tossing around acronyms—we get into how AI can help analysts cut through the clutter, why relevancy is the next frontier, and what it means to design intelligence that works the way humans actually think.

    Listen to or watch the full episode here:

    50 min
