Security Intelligence Podcast

IBM

Security Intelligence is a weekly news podcast for cybersecurity pros who need to stay ahead of fast-moving threats. Each week, we cover the latest threats, trends, and stories shaping the digital landscape, alongside expert insights that help make sense of it all. Whether you’re a builder, defender, business leader, or simply curious about how to stay secure in a connected world, you’ll find timely updates and timeless principles in an accessible, engaging format. New episodes weekly on Wednesdays at 6 a.m. ET.

  1. Should you let OpenClaw pen test your system? Plus: Cybersecurity for ephemeral software

    APR 22

    Should you let OpenClaw pen test your system? Plus: Cybersecurity for ephemeral software

    Learn more about how enterprises confront agentic attacks → https://newsroom.ibm.com/2026-04-15-ibm-announces-new-cybersecurity-measures-to-help-enterprises-confront-agentic-attacks

    Sophos let OpenClaw run wild on its network (sort of). It wasn’t as bad an idea as it sounds! With a few guardrails and restrictions in place, the security software firm turned OpenClaw into a serious little pen tester, surfacing “23 actionable, high-quality findings.” But is this a sustainable model for introducing AI agents to the security process? And how do we deal with the inevitable friction between a model meant to find exploits and the guardrails telling it to do no harm? This week, host Matt Kosinski and panelists Claire Nuñez, Dave McGinnis and Kimmie Farrington discuss the wisdom and folly of letting an AI agent pen test your system. Plus: We dig into Bruce Schneier’s thoughts on “security in the age of instant software” and a report from CipherCue that ransomware is growing three times faster than security spending. All that and more on Security Intelligence.

    Segments:
    00:00 – Intro
    1:07 – OpenClaw as a pen tester
    14:23 – Cybersecurity for instant software
    25:36 – Ransomware outpaces security spending

    The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.

    Follow the Security Intelligence podcast on your preferred platform → https://www.ibm.com/think/podcasts/security-intelligence

    36 min
  2. Claude Mythos: Marketing hype or the end of cybersecurity?

    APR 15

    Claude Mythos: Marketing hype or the end of cybersecurity?

    Anthropic says its newest AI model, Claude Mythos, has found thousands of zero-day vulnerabilities across every major OS and web browser. It's so powerful, they won't release it publicly. Instead, they’re restricting access to a handful of trusted partners, who get to experiment with Mythos through a new initiative called Project Glasswing. Where does that leave the rest of us, who don’t get to tinker with perhaps the most advanced model yet? This week on Security Intelligence, Sridhar Muppidi, Michelle Alvarez, and Dustin “EvilMog” Heywood join host Matt Kosinski to discuss what Mythos and Glasswing really mean for the average security pro. How much is hype? How much is the real deal? And how could this limited release backfire? Then: The FBI’s 2025 Internet Crime Report saw scam losses jump 26%, and Accenture found a 127% increase in malicious hackers trying to recruit the employees of their targets. All that and more on Security Intelligence.

    Segments:
    00:00 – Intro
    1:22 – Claude Mythos and Project Glasswing
    12:26 – The 2025 Internet Crime Report
    20:19 – Attackers recruiting more insiders

    The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.

    Read more about AI raising cybersecurity stakes → https://www.ibm.com/think/news/anthropic-claude-ai-mythos-project-glasswing-raises-stakes-cybersecurity

    Follow the Security Intelligence podcast on your preferred platform → https://www.ibm.com/think/podcasts/security-intelligence

    30 min
  3. The Claude Code source code leak: Takeaways for cybersecurity pros

    APR 8

    The Claude Code source code leak: Takeaways for cybersecurity pros

    What happens when one of the world’s most popular AI coding tools falls into the wrong hands? On this episode of Security Intelligence, Nick Bradley, Dave Bales and JR Rao discuss the Claude Code source code leak. Attackers are already using the opportunity to spread malware through fake repos, but the real question is how threat actors might use their newfound knowledge of Claude Code’s internals to wreak havoc on AI agents and the CI/CD pipeline. Then, we follow up on our old friends TeamPCP, Shiny Hunters and Lapsus$, whose overlapping data breach claims are causing no small amount of confusion and consternation among security pros. We examine the credential rotation problem and the uneven security surface of modern supply chains that helped get us in this mess. Plus: Threat intelligence usually focuses on attacks that did happen. But what if we started talking about the ones that didn’t? And do cybercriminals have anything to teach us about “mature” AI adoption? Some big names seem to think so. All that and more on Security Intelligence.

    Segments:
    00:00 – Intro
    1:12 – The Claude Code leak
    11:19 – TeamPCP’s breach spree
    21:21 – “Close-call” databases
    29:28 – Cybercrime and AI adoption

    The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.

    Follow the Security Intelligence podcast on your preferred platform → https://www.ibm.com/think/podcasts/security-intelligence

    42 min
  4. RSA recap, the LiteLLM breach, and the quest to fix AI

    APR 1

    RSA recap, the LiteLLM breach, and the quest to fix AI

    LiteLLM is a nifty little Python library that gives you access to about 100 different AI services through one API. It gets an estimated 3.4 million downloads a day. And last week, it was turned into a Trojan horse, distributing infostealers to hundreds of thousands of devices. (At least, that’s what TeamPCP says—the hackers behind the LiteLLM breach and a slew of other high-profile software supply chain attacks in recent weeks.) To quote Andrej Karpathy: This is “basically the scariest thing imaginable in modern software.” On this episode of Security Intelligence, Suja Viswesan, Dave McGinnis and Jeff Crume help us break down the LiteLLM breach and the broader campaign TeamPCP is waging. We’re also joined by HashiCorp Field CTO Jake Lundberg in the first segment for a discussion of how organizations are trying—with varying degrees of success—to tackle the agentic AI problem. AI agents are identities—but identities our existing frameworks weren’t built to house. Simply porting existing human and non-human identity management practices onto them won’t cut it. But the question remains: What do we need instead? All that and more on Security Intelligence.

    Segments:
    00:00 – Intro
    1:13 – Who will fix AI agent security?
    21:17 – RSAC 2026 recap
    29:31 – 2026’s most dangerous cyberattacks
    40:45 – The LiteLLM breach

    The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.

    Follow the Security Intelligence podcast on your preferred platform → https://www.ibm.com/think/podcasts/security-intelligence

    49 min

