Cloud Security Podcast by Google

Anton Chuvakin

Cloud Security Podcast by Google focuses on security in the cloud, delivering security from the cloud, and all things at the intersection of security and cloud. Of course, we will also cover what we are doing in Google Cloud to help keep our users' data safe and workloads secure. We're going to do our best to avoid security theater and cut to the heart of real security questions and issues. Expect us to question threat models and ask if something is done for the data subject's benefit or just for organizational benefit. We hope you'll join us if you're interested in where technology overlaps with process and bumps up against organizational design. We're hoping to attract listeners who are happy to hear conventional wisdom questioned, and who are curious about what lessons we can and can't keep as the world moves from on-premises computing to cloud computing.

  1. 21 HR AGO

    EP262 Freedom, Responsibility, and the Federated Guardrails: A New Model for Modern Security

    Guest: Alex Shulman-Peleg, Global CISO at Kraken

    Topics:

    You mentioned that centralized security can't work anymore. Can you elaborate on the key changes, driven by cloud, SaaS, and AI, that have made this traditional model unsustainable for a modern organization? Why do some organizations persist with a centralized, top-down approach to security despite that?

    What do you mean by "Freedom, Responsibility and distributed security"? Can you explain the difference between "centralized security" and what you define as "security with distributed ownership"? Is this the same as "federated"?

    In our conversation you mentioned "cloud and AI-native." What do you mean by this (especially "AI-native"), and how is it changing your approach to security?

    You introduce the concept of "Security as quality," suggesting that a security-unaware developer is essentially a bad software developer. How do you shift the culture and internal metrics to make security an inherent quality standard, rather than a separate, compliance-driven checklist?

    You likened the central security team's new role to a "911 emergency service." Beyond incident response, what stays central no matter what, and how does the central team successfully influence the security posture of the entire organization without being directly responsible for the day-to-day work?

    Resources:

    Video version
    EP129 How CISO Cloud Dreams and Realities Collide
    EP258 Why Your Security Strategy Needs an Immune System, Not a Fortress with Royal Hansen
    EP212 Securing the Cloud at Scale: Modern Bank CISO on Metrics, Challenges, and SecOps

    29 min
  2. 12 JAN

    EP258 Why Your Security Strategy Needs an Immune System, Not a Fortress with Royal Hansen

    Guest: Royal Hansen, VP of Engineering at Google, former CISO of Alphabet

    Topics:

    The "God-Like Designer" Fallacy: You've argued that we need to move away from the "God-like designer" model of security, where we pre-calculate every risk like building a bridge, and towards a biological model. Can you explain why that old engineering mindset is becoming risky in today's cloud and AI environments?

    Resilience vs. Robustness: In your view, what is the practical difference between a robust system (like a fortress that eventually breaks) and a resilient system (like an immune system)? How does a CISO start shifting their team's focus from creating the former to nurturing the latter?

    Securing the Unknown: We're entering an era where AI agents will call other agents, creating pathways we never explicitly designed. If we can't predict these interactions, how can we possibly secure them? What does "emergent security" look like in practice?

    Primitives for Agents: You mentioned the need for new "biological primitives" for these agents, things like time-bound access or inherent throttling. Are these just new names for old concepts like Zero Trust, or is there something different about how we need to apply them to AI?

    The Compliance Friction: There's a massive tension between this dynamic, probabilistic reality and the static, checklist-based world of many compliance regimes. How do you, as a leader, bridge that gap? How do you convince an auditor or a board that a "probabilistic" approach doesn't just mean "we don't know for sure"?

    "Safe" Failures: How can organizations get comfortable with the idea of designing for allowable failure in their subsystems, rather than striving for 100% uptime and security everywhere?

    Resources:

    Video version
    EP189 How Google Does Security Programs at Scale: CISO Insights
    BigSleep and CodeMender agents
    "Chasing the Rabbit" book
    "How Life Works: A User's Guide to the New Biology" book

    32 min
  3. 5 JAN

    EP257 Beyond the 'Kaboom': What Actually Breaks When OT Meets the Cloud?

    Guest: Chris Sistrunk, Technical Leader, OT Consulting, Mandiant

    Topics:

    When we hear "attacks on Operational Technology (OT)," some think of Stuxnet targeting PLCs, or even the backdoored pipeline control software plot of the 1980s. Is this space always so spectacular, or are there less "kaboom"-style attacks we are more concerned about in practice?

    Given the old "air-gapped" mindset of many OT environments, what are the most common security gaps or blind spots you see when organizations start to integrate cloud services for things like data analytics or remote monitoring?

    How is the shift to cloud connectivity (for things like data analytics, centralized management, and remote access) changing the security posture of these systems? What's a real-world example of a positive security outcome you've seen as a direct result of this cloud adoption?

    How do the Tactics, Techniques, and Procedures outlined in the MITRE ATT&CK for ICS framework change or evolve when attackers can leverage cloud-based reconnaissance and command-and-control infrastructure to target OT networks? Can you provide an example?

    OT environments are generating vast amounts of operational data. What in that data is interesting for OT Detection and Response (D&R)?

    Resources:

    Video version
    Cybersecurity Forecast 2026 report by Google
    "Complex, hybrid manufacturing needs strong security. Here's how CISOs can get it done" blog
    "Security Guidance for Cloud-Enabled Hybrid Operational Technology Networks" paper by Google Cloud Office of the CISO
    DEF CON 23 - Chris Sistrunk - NSM 101 for ICS
    MITRE ATT&CK for ICS

    27 min
  4. 08/12/2025

    EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking

    Guest: Heather Adkins, VP of Security Engineering, Google

    Topics:

    The term "AI Hacking Singularity" sounds like pure sci-fi, yet you and some other very credible folks are using it to describe an imminent threat. How much of this is hyperbole to shock the complacent, and how much is based on actual, observed capabilities today? Can autonomous AI agents really achieve "exploit at machine velocity" without human intervention for the zero-day discovery phase? On the other hand, why might it actually not happen?

    When we talk about autonomous AI attack platforms, are we talking about highly resourced nation-states and top-tier criminal groups, or will this capability truly be accessible to the average threat actor within the next 6-12 months? What's the "Metasploit" equivalent for AI-powered exploitation that will be ubiquitous?

    Can you paint a realistic picture of the worst-case scenario that autonomous AI hacking enables? Is it a complete breakdown of patch cycles, a global infrastructure collapse, or something worse?

    If attackers are operating at "machine speed," the human defender is fundamentally outmatched. Is there a genuine "AI-to-AI" counter-tactic that doesn't just devolve into an infinite arms race? Or can we counter without AI at all?

    Given that AI can expedite vulnerability discovery, how does this amplified threat vector impact the software supply chain? If a dependency is compromised within minutes of a new vulnerability being created, does this force the industry to completely abandon the open-source model, or does it demand a radical, real-time security scanning and patching system that only a handful of tech giants can afford?

    Are current proposed regulations, like those focusing on model safety or disclosure, even targeting the right problem? If the real danger is the combinatorial speed of autonomous attack agents, what simple, impactful policy change should world governments prioritize right now?

    Resources:

    "Autonomous AI hacking and the future of cybersecurity" article
    EP20 Security Operations, Reliability, and Securing Google with Heather Adkins
    Introducing CodeMender: an AI agent for code security
    EP251 Beyond Fancy Scripts: Can AI Red Teaming Find Truly Novel Attacks?
    Daniel Miessler site and podcast
    "How SAIF can accelerate secure AI experiments" blog
    "Staying on top of AI Developments" blog

    30 min

