TechSpective Podcast

Tony Bradley

The TechSpective Podcast brings together top minds in cybersecurity, enterprise tech, AI, and beyond to share unique perspectives on technology—unpacking breakthrough trends like zero trust, threat intelligence, AI-enabled security, ransomware’s geopolitical ties, and more. Whether you’re an IT pro, security exec, or simply tech‑curious, each episode blends expert insight with real-world context—from microsegmentation strategies to the human side of cyber ethics. But we also keep it fun, sometimes riffing on pop‑culture debates like Star Wars vs. Star Trek or Xbox vs. PS—so it’s not all dry and serious.

  1. 14 HR AGO

    Most Companies Are Still Just Playing With AI

    Everybody's got an AI strategy. Every platform claims to be AI-powered. Every vendor deck has a slide about how their product uses machine learning to deliver transformative outcomes. Most of it is still theater.

    I had a conversation with David DeSanto, CEO at Anaconda, recently for the TechSpective Podcast, and what struck me most was how honest he was about where enterprise AI actually stands. Not where vendors want it to be. Not where the headlines say it is. Where it actually is.

    A lot of organizations have run pilots. Some have solid proof-of-concept projects. A handful have built internal tools that genuinely save teams time. But very few have moved AI into real production across the business. There's a big difference between "we're experimenting" and "this is how we work now." That gap is where most companies are stuck—and it's not because the technology doesn't work.

    The demo almost always looks good. A model produces useful output. A prototype saves someone a few hours. The problem shows up when you try to scale that across an enterprise environment. Suddenly you're dealing with data governance questions, security concerns, reliability issues, and a fundamental trust problem: can we actually rely on what this thing produces? Those issues don't show up in the demo.

    Open source plays an interesting role here. It's always been central to the data science world, and that hasn't changed. Developers and data scientists are still experimenting constantly—new models, new frameworks, new workflows. Open ecosystems make that possible. But they also create real headaches for organizations trying to manage dependencies, maintain security, and keep things consistent across teams. Innovation versus governance. That's the tension nobody has fully figured out yet.

    Something else worth noting: AI is changing what technical expertise actually means. Tasks that required specialized skills a few years ago can now be partially automated. That sounds like it should reduce the need for expertise—but it mostly just moves where that expertise matters. Technical teams spend less time writing code from scratch and more time framing problems, evaluating outputs, and validating results. Knowing how to ask the right question—or spot when an AI's answer is subtly wrong—can matter more than generating the answer in the first place. That's a real shift in how those jobs work, and most organizations are still figuring out how to adapt.

    Trust is the underlying issue running through all of this. Organizations can't treat AI like a magic box that produces correct answers. They need to understand how models work, how their data is being used, and how outputs are generated. Without that visibility, it's hard to rely on AI for anything that actually matters.

    And the challenge isn't really technical. The technology works well enough. What's hard is building the infrastructure, governance, and culture around it—getting security teams, data scientists, developers, and business leaders to actually work together instead of operating in separate lanes. That collaboration doesn't happen naturally. It has to be built deliberately.

    AI also tends to change the process, not just speed it up. Teams aren't just doing the same work faster—they're working differently, exploring problems differently, testing ideas differently. Machines are becoming collaborators in that process rather than just tools. Adapting to that takes time.

    The organizations that figure it out won't be the ones with the most advanced AI technology. They'll be the ones that put in the unglamorous work—governance frameworks, cross-team alignment, careful validation of what the AI actually produces. That's less exciting than the vendor pitch. But it's closer to what real progress looks like.

    56 min
  2. 3 DAYS AGO

    Rethinking Cybersecurity For A World Of AI And Machine Identities

    I spend a lot of time talking with people in cybersecurity. Founders, analysts, CISOs, researchers. One thing that comes up again and again is that the problem space keeps getting bigger. Not just more threats—more complexity. That’s really the thread running through my recent TechSpective Podcast conversation with Clarence Chio, co-founder and CEO of Coverbase.

    Security used to be easier to conceptualize. Not easier to solve, necessarily—but easier to frame. You had networks, endpoints, users, and a perimeter. Protect the edge. Monitor what’s inside. Respond when something goes wrong. That model doesn’t really exist anymore.

    Today, most organizations operate in environments that span multiple clouds, dozens or hundreds of SaaS applications, APIs everywhere, and automated workflows connecting everything together. Identities are everywhere too—human users, service accounts, machine identities, AI agents. The number of things acting inside a system has exploded. And every one of those things represents potential risk.

    Clarence and I spent a good part of the conversation talking about how that shift changes the nature of cybersecurity. It’s less about building walls and more about understanding behavior. Who is doing what? What systems are interacting? What’s normal, and what isn’t? That sounds simple, but it’s actually one of the hardest problems in security right now. The environment changes constantly. New tools get deployed. Developers spin up services. AI models start interacting with data pipelines and APIs. Keeping track of it all is a challenge.

    Then there’s the AI angle. AI is showing up everywhere right now—on both sides of the security equation. Security vendors are embedding AI into their platforms to analyze data faster and automate responses. At the same time, attackers are experimenting with AI to generate malware, improve phishing, and automate reconnaissance.

    But one thing Clarence pointed out—and I agree—is that AI doesn’t magically solve security problems. If anything, it tends to amplify whatever processes already exist. If your visibility is poor, AI doesn’t fix that. If your governance is weak, automation can actually make the problem worse. Technology alone rarely fixes systemic problems.

    Another part of the discussion that stood out to me was the human side of security. It’s easy to focus on tools because that’s what vendors sell. But effective security programs depend heavily on the people running them. Security professionals need to understand the technology, obviously. But they also need context and judgment. They need to know how systems interact and how changes ripple across an environment. And maybe most important, they need the freedom to question assumptions.

    That’s something Clarence emphasized during the conversation. In fast-moving technology environments, curiosity and critical thinking matter. Security teams can’t just follow checklists. They have to understand how systems behave and be able to spot when something doesn’t look right.

    Which brings us back to complexity. The attack surface keeps growing. Infrastructure is more distributed. AI and automation are adding new layers of capability—and new layers of risk. There’s no single tool that solves that. What organizations can do is build better visibility, invest in people, and develop security programs that are designed to adapt rather than assume the environment will stay stable. That’s easier said than done, but it’s the direction things are moving.

    If you’re working in security—or just trying to make sense of how AI and modern infrastructure are reshaping risk—I think you’ll find the conversation interesting. Clarence brings a thoughtful perspective, and we cover a lot of ground without getting lost in buzzwords. You can listen to the full episode of the TechSpective Podcast or watch the discussion on YouTube.

    48 min
  3. 19 FEB

    Who Do You Trust Online—And Why?

    Trust on the internet used to be a fairly simple calculation. You looked for familiar names, recognizable brands, maybe a blue checkmark, and you made a judgment call. Today, that math often fails. AI has changed the game. Deepfakes are convincing. Entire personas can be spun up in minutes. Fraud doesn’t look sloppy anymore—it looks professional. And in many cases, it looks exactly like the people and platforms we already rely on.

    That’s the backdrop for my latest episode of the TechSpective Podcast, where I sat down with Oscar Rodriguez, who leads product efforts around trust at LinkedIn. The conversation quickly moved past features and announcements and into a much bigger question: how do we decide who to trust online when it’s getting harder to tell what’s real?

    LinkedIn has become my primary social platform over the past few years—partly by default, partly by design. As other platforms drifted further into chaos, LinkedIn positioned itself as the place where professional identity still mattered. But even there, the ground is shifting. The platform is more social than it used to be. The conversations are broader. And the risks are higher.

    In this episode, we dig into that evolution—not just how LinkedIn has changed, but why it’s changing and what that means for the people using it every day. We talk about professionalism as a concept, how it’s expanded beyond résumés and job postings, and why trying to rigidly police what “belongs” on a professional platform misses the point. At the same time, we don’t ignore the downside of that openness.

    One of the recurring themes in our conversation is signal versus noise. When you’re interacting with people you don’t know—often several degrees removed from your own network—what clues do you rely on to decide whether someone is legitimate? Mutual connections? Profile history? Gut instinct? Verification badges? Those signals matter more than ever, and not just on LinkedIn.

    As Oscar explains, trust has become a portable problem. We’re constantly being asked to prove who we are, where we work, or whether we belong—often across dozens of platforms that don’t talk to each other. That friction creates opportunity for abuse, but it also forces a conversation about how trust should work at internet scale.

    We also get into how AI is accelerating the arms race. The same tools that make it easier to create content and connect at scale also make it easier to deceive. Fraudsters don’t need to sound unprofessional anymore. Bots don’t look like bots. And “doing your own research” is a lot harder when expertise itself can be convincingly faked.

    Rather than offering simple answers, this episode focuses on the trade-offs. How much friction is acceptable in the name of safety? What does verification actually prove—and what doesn’t it prove? Should trust be assessed once, or continuously? And who ultimately bears responsibility when things go wrong: the platform, the user, or both?

    Listen to or watch the full episode of the TechSpective Podcast with Oscar Rodriguez to hear the whole conversation.

    50 min
  4. 30 JAN

    Why Identity Is the Key to AI-Driven Defense

    If you’ve been following trends in cybersecurity and enterprise tech, you already know that AI has become more than a buzzword—it’s a foundational shift. What may surprise you, though, is just how central identity has become in that evolution.

    In the latest episode of the TechSpective Podcast, I had the chance to speak with Naresh Persaud, Principal at Deloitte, who has spent more than two decades working in identity and cybersecurity. Today, he leads Deloitte’s Cyber AI Blueprint initiative—an effort aimed at reimagining cybersecurity from the ground up using AI.

    Our conversation explores why identity—something many people still think of as basic authentication—is now arguably the most critical pillar of AI-enabled cybersecurity. We dig into how identity data can enhance threat detection, simplify operations, and serve as the connective tissue across traditionally siloed cyber disciplines.

    And while we’ve all heard about identity’s role in credential theft and privilege abuse, Naresh takes it further—explaining how identity intersects with the very architecture of agentic AI systems. Spoiler: It’s not really about humans. The world of non-human identities—workloads, bots, agentic systems—has grown exponentially. That shift creates enormous opportunity but also opens up a wide new attack surface that most organizations aren’t yet equipped to secure.

    One of the key themes in this episode is context. Naresh emphasizes that identity provides context in a way no other signal can. Behavioral anomalies, access patterns, and workload telemetry are far more meaningful when filtered through the lens of identity. That’s especially important when adversaries increasingly rely on valid credentials to carry out attacks. In a world where everything looks like an insider threat, context is king.

    We also talk about where traditional security approaches fall short—and how cognitive cybersecurity changes the game. From simplifying the security stack to enabling faster, smarter decisions, AI (when paired with identity) is already showing promise in SOC operations and incident response.

    If that sounds a bit abstract, don’t worry—Naresh brings clarity with real-world examples and tangible insights. He connects the dots between AI, identity, and cyber maturity in a way that’s refreshingly grounded. Whether you’re a CISO, an identity architect, or just someone trying to stay ahead of the curve, there’s something in this conversation for you.

    One thing’s clear: AI is forcing us to rethink cybersecurity assumptions we’ve held for decades. And identity is no longer a sidekick in that story—it’s a strategic anchor.

    Check out the full episode wherever you get your podcasts—or watch the video version on YouTube. You’ll walk away with a deeper understanding of why identity matters more than ever—and how to position your organization for what comes next.

    54 min
  5. 21 JAN

    Zero Trust, Real Talk: A Conversation with Dr. Chase Cunningham

    How do you know your cybersecurity investments are actually making you safer? That’s the question at the heart of the latest TechSpective Podcast episode, where Dr. Chase Cunningham—better known to many as “Dr. Zero Trust”—joins me for an unfiltered, candid conversation about the state of modern cybersecurity. And no, this isn’t a puff piece on policy frameworks or the latest silver bullet tool.

    If you’ve read Chase’s recent LinkedIn post “Misaligned Zero Trust Spend = 1999 Firewall FOMO, But Worse,” you already know where this is going: straight into the hard truths about how organizations are still getting Zero Trust fundamentally wrong.

    In his post, Chase makes a blunt observation that became the foundation for our discussion: too many companies treat Zero Trust like a shopping list—buying products instead of outcomes. “If your ‘Zero Trust’ line items don’t move incident frequency, blast radius, or time to contain, you’re not buying security—you’re buying feelings.” That line stood out to me and was part of why I reached out to invite Chase to join me on the podcast.

    No Silver Bullets, Just Smarter Questions

    This isn’t an episode full of buzzwords or vendor shout-outs. It’s a reminder that there’s no shortcut around the work. Whether we’re talking about identity-anchored access control, microsegmentation, or reducing dwell time through automation, Chase repeatedly returns to a central theme: strategy over spectacle.

    He compares some security spending habits to crash diets and “cyber fat pills”—quick fixes that sound great in a pitch deck but collapse under scrutiny. Just like with fitness, real security gains come from consistency, not gimmicks.

    We also explore the often-overlooked relationship between breach economics and stock price behavior—another area where Chase has done deep research. The myth that a breach will destroy a brand? It’s more complicated than that. Sometimes (pro tip: most of the time) the dip is a buying opportunity, not a death sentence.

    Why You Should Listen

    If you’re a CISO, security architect, board member—or just someone trying to make sense of your security stack—this conversation will challenge your assumptions in all the right ways. It’s part therapy session, part strategy clinic, and entirely grounded in real-world experience.

    Check out the full episode:

    38 min
  6. 31 DEC 2025

    Algorithms, Thought Leadership, and the Future of Digital Influence

    It’s getting harder to have a “normal” conversation about content, social media, or visibility anymore—mostly because the rules keep changing while you’re still mid-sentence. Just a few years ago, you could create a blog post, optimize it for SEO, promote it on Twitter (back when it was still Twitter and not a dumpster fire of right-wing conspiracy lunacy rebranded as X), and expect a decent number of eyeballs to land on it. That’s not the game anymore.

    Now we’re living in a world of algorithmic gatekeeping, AI-generated content slop, and platforms that are slowly morphing into echo chambers of their own making. And as someone who spends a lot of time thinking, writing, and talking about tech, marketing, and cybersecurity, I wanted to have an actual conversation about what this means—beyond the usual recycled talking points. So, I invited Evan Kirstel onto the TechSpective Podcast to dig in.

    If you’re not familiar with Evan, you should be. He’s one of the more influential voices in B2B tech media—part content creator, part live streamer, part analyst, part TV host, depending on the day. He’s also been doing this for a while, and more importantly, doing it well. That makes him a great sounding board for the increasingly murky topic of digital thought leadership.

    One of the first things we talked about was the rise of formulaic, AI-generated content. You know the kind—it reads like it was built from a checklist of “engagement best practices,” and while it may technically be “on brand,” it’s rarely interesting. The irony, of course, is that the platforms boosting this kind of content are simultaneously rewarding quantity over quality, while drowning users in sameness.

    From there, we explored how visibility really works in 2025. Hint: it’s no longer about who you know—it’s about which large language model knows you. If you’re not showing up in ChatGPT summaries or Google’s new generative answers, you’re basically invisible to a big chunk of your potential audience. Which raises the question: how do you actually earn mindshare in a world where traditional SEO has been replaced by AI synthesis?

    We didn’t land on a one-size-fits-all answer—but we did agree on a few things. First, content that sounds like content for content’s sake? It’s dead. Thought leadership that merely echoes what 20 other people are already saying? Also dead. What works now is originality, consistency, and credibility—backed by actual lived experience.

    Another key theme we unpacked: platforms. Everyone likes to say “meet your audience where they are,” but it’s harder than it sounds when the audience is splintered across LinkedIn, Reddit, YouTube, TikTok, and a dozen other niche platforms—each with its own expectations and formats. Evan shared how he tailors his content for each platform without diluting the message, and why companies that try to be “cool” without context usually fall flat.

    I’ll also say this—this episode reminded me that high-quality conversations are still one of the most underutilized forms of content out there. When it’s not scripted or polished within an inch of its life, a good conversation can cut through the noise and resonate on a level most polished op-eds or templated videos never will.

    So if you’re feeling stuck, wondering why your content isn’t landing like it used to, or trying to figure out how to show up where it matters—this episode is worth your time. Check out my conversation with Evan Kirstel on the TechSpective Podcast. And yes, we get into Gary Vaynerchuk, TikTok, zero-click search, and why it might be time to completely rethink your content strategy.

    47 min
  7. 28 DEC 2025

    Shadow AI, Cybersecurity, and the Evolving Threat Landscape

    The cybersecurity landscape never sits still—and neither do the conversations I aim to have on the TechSpective Podcast. In the latest episode, I sit down with Etay Maor, Chief Security Strategist at Cato Networks and a founding member of Cato CTRL, the company’s cyber threats research lab. Etay brings a rare mix of technical depth and practical perspective—something increasingly necessary as we navigate the murky waters of modern cyber threats.

    This time, the conversation centers on the rise of Shadow AI—a topic gaining urgency but still underappreciated in many organizations. If Shadow IT was the quiet rule-breaker of the past decade, Shadow AI is its unpredictable, algorithmically supercharged cousin. It’s showing up in boardrooms, workflows, and marketing departments—often without security teams even knowing it’s there.

    Here’s the thing: banning AI tools or blocking access doesn’t work. People find a way around it. We’ve seen this play out with cloud storage, collaboration tools, and other “unsanctioned” technologies. The same logic applies here. Etay and I explore why organizations need to move beyond a binary yes/no mindset and instead think in terms of guardrails, visibility, and enablement.

    We also get into the tension between innovation and risk—how fear-based decision-making can put companies at a disadvantage, and why the bigger threat might be not using AI at all. That may sound counterintuitive coming from two people steeped in cybersecurity, but context matters. The risk of falling behind could be greater than the risk of exposure—if companies don’t take a strategic approach.

    Naturally, the conversation expands into how threat actors are adapting AI for offensive purposes—crafting more convincing phishing emails, automating reconnaissance, and even gaming defensive AI tools. Etay shares sharp insights into how attackers use our own tools against us and what that means for the future of cybersecurity.

    There’s also a philosophical thread woven throughout—questions about whether AI can truly be “original,” how human creativity intersects with machine learning, and what kind of ethical or regulatory frameworks might be needed (if any) to keep things from going off the rails. Etay brings both technical fluency and historical perspective to the discussion, making it a conversation that’s as grounded as it is thought-provoking.

    This episode doesn’t veer into fear-mongering or hype. It stays real—examining where we are, where we’re headed, and how to make better decisions as the ground keeps shifting. Whether you’re in security, tech leadership, policy, or just curious about how AI is reshaping the digital battleground, this one’s worth your time.

    Tune in to the latest TechSpective Podcast—now streaming on all major platforms. Share your thoughts in the comments below.

    58 min
  8. 23 DEC 2025

    Agentic AI and the Art of Asking Better Questions

    I’ve had a lot of conversations about AI over the past couple of years—some insightful, some overhyped, and a few that left me questioning whether we’re even talking about the same technology. But every now and then, I get the opportunity to sit down with someone who not only understands the technology but also sees its broader implications with clarity and honesty. This episode of the TechSpective Podcast is one of those moments.

    Jeetu Patel, President and Chief Product Officer at Cisco, joins me for an unscripted, unfiltered conversation that covers more ground than I could have outlined in a set of pre-written questions. Actually, I did draft a set of pre-written questions. We just didn’t use them at all.

    Jeetu and I have known each other for a while, and this episode reflects the kind of conversation you only get with someone who’s deeply immersed in both the strategic and human sides of tech. It’s thoughtful. It’s philosophical. And it doesn’t pull punches.

    At the center of our discussion is the concept of “agentic AI”—a term that’s being used more frequently, sometimes without much clarity. We unpack what it actually means, what it can realistically do, and how it differs from the wave of chatbots and content generators that came before it. More importantly, we talk about how these AI agents might change not just the tasks we automate, but how we think about work itself.

    Of course, with any conversation about AI and the future of work comes the inevitable tension: what gets lost, what gets reimagined, and what still requires distinctly human judgment. Jeetu brings a nuanced take to this, rooted in his experience leading product innovation at one of the world’s largest tech companies. It’s not a conversation filled with predictions so much as it is a reframing of the questions we should be asking.

    What stood out to me is how quickly we normalize the extraordinary. A technology that felt magical two years ago is now embedded in our daily workflows. That speed of adoption changes the stakes. It means we need to be more deliberate—not just about what AI can do, but what we want it to do, and what we risk offloading too quickly.

    We also touch on the philosophical implications. If AI agents really can handle more of the cognitive heavy lifting, what’s our role in the loop? Do we become editors? Overseers? Explorers of new frontiers? And how do we prepare for jobs that don’t exist yet, using tools that are evolving faster than we can document them?

    I think this episode will resonate with anyone trying to navigate this moment—whether you’re in product development, policy, marketing, or just someone who likes to think a few moves ahead. It’s about more than AI. It’s about how we adapt, how we define value, and what we choose to hold onto as the landscape shifts.

    Give it a listen. And as always, I’d love to hear your thoughts.

    53 min
