Cyber Sentries: AI Insights for Cloud Security

Cyber Sentries explores the critical convergence of AI, cloud, and cybersecurity, diving deep into how these three pillars are actively redefining the modern Security Operations Center (SOC). As the threat landscape grows in complexity, we showcase the accelerating role of AI in defending cloud infrastructure, applications, and data. Join us as we illuminate this high-stakes intersection—a space where cutting-edge innovation meets the necessity for continuous vigilance—to transform how organizations approach resilience in a digital-first world.

  1. Built Fast, Broken Faster: MCP & AI App Security—with GitGuardian’s Gaetan Ferry

    19H AGO


    When “Ship Fast” Meets “Secure by Design” in AI Apps

    AI-driven development is moving at breakneck speed—and attackers are taking advantage of the shortcuts. In this episode of Cyber Sentries: AI Insights for Cloud Security, host John Richards sits down with Gaetan Ferry, security researcher at GitGuardian, to unpack how modern AI tooling, MCP servers, and cloud platforms are reshaping the security landscape. The core problem: the same agentic workflows that boost productivity can also multiply identities, credentials, and blast radius if something goes wrong.

    After John and Gaetan set the stage, Gaetan walks through a real-world-style vulnerability chain involving smithery.ai, an MCP server registry/hosting platform. It’s a practical look at how “classic” web issues can still show up in brand-new AI ecosystems—and how one small weakness can cascade into bigger supply chain risk. Along the way, they explore why secret sprawl is accelerating, what attackers are hunting for, and why observability is becoming as essential for identities and tokens as it is for infrastructure.

    Why MCP Servers, OAuth, and Secret Sprawl Are Colliding

    A big theme is the tension between usability and security: teams want agents that can “do everything,” which often means broad permissions and long-lived credentials. Gaetan explains why adopting OAuth is directionally better than static API keys, but still not a silver bullet in a world where agents need delegated access and tokens inevitably “live somewhere.” John pushes on what builders can do now—especially when new frameworks (and new hype cycles) keep resetting hard-won security practices. The conversation lands on pragmatic guidance: reduce blast radius where you can, inventory identities and secrets, and invest in observability so you can respond fast when—not if—credentials leak.

    Note: This episode discusses breach scenarios and exploitation chains—be thoughtful about sharing internal security details and incident response specifics.

    Questions We Answer in This Episode

    - How can a simple web flaw turn into an AI supply chain attack through MCP server hosting?
    - Why doesn’t OAuth automatically “solve” agent security and credential risk?
    - What does “limiting blast radius” look like when agents need broad permissions to be useful?
    - How can observability help you detect and respond to secrets sprawl across AI tools?

    Key Takeaways

    - Treat MCP servers and agent integrations like critical supply chain dependencies—because they are.
    - Prefer short-lived, scoped credentials (OAuth when possible), but plan for token theft scenarios anyway.
    - Reduce blast radius with least privilege, separation of duties, and segmented agent access.
    - Build identity and secret observability so you can triage and remediate leaks quickly.

    The Bottom Line for AI Security Teams in 2026

    If you’re experimenting with MCP servers or rolling out agentic workflows, this episode is a timely reminder that fundamentals still win. John and Gaetan make the case that “moving fast” doesn’t have to mean accepting unlimited credential risk—you can ship quickly while still tightening scopes, tracking identities, and watching where secrets spread. Tune in for the real-world examples and the practical mindset shift that helps teams stay productive without becoming the next supply chain headline.

    Links & Notes

    - GitGuardian
    - Connect with Gaetan on LinkedIn
    - State of Secrets Sprawl Report 2025
    - State of Secrets Sprawl Report 2026 (coming later in March!)
    - CyberProof
    - Learn more about Paladin Cloud
    - Got a question? Ask us here!

    Chapters

    (00:04) - Welcome to Cyber Sentries
    (01:07) - Meet Gaetan Ferry
    (02:19) - Attacks
    (03:17) - Vulnerabilities
    (07:38) - One-Off or Widespread?
    (10:20) - Recommendations to Avoid
    (14:19) - Exploiting
    (16:50) - Resolving
    (23:13) - Path Forward
    (30:53) - Impact
    (34:48) - Year of Supply Chain Attacks
    (35:51) - Wrap Up
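The “short-lived, scoped credentials” takeaway from this episode can be sketched in a few lines. This is an illustrative example only, not GitGuardian tooling; the helper names `mint_token` and `authorize` are hypothetical:

```python
import time
import secrets

def mint_token(scopes, ttl_seconds=300):
    """Issue a credential limited to explicit scopes and a short lifetime."""
    return {
        "value": secrets.token_urlsafe(32),
        "scopes": frozenset(scopes),
        "expires_at": time.time() + ttl_seconds,
    }

def authorize(token, required_scope):
    """Reject expired tokens and tokens lacking the required scope."""
    if time.time() >= token["expires_at"]:
        return False
    return required_scope in token["scopes"]

tok = mint_token({"repo:read"}, ttl_seconds=300)
print(authorize(tok, "repo:read"))   # True: in scope and not expired
print(authorize(tok, "repo:write"))  # False: out-of-scope access denied
```

Even with a sketch like this, plan for theft: a stolen token is only as dangerous as its scope and remaining lifetime, which is the point of keeping both small.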

    39 min
  2. Identity in the AI Era: Managing Enterprise Risk in the Age of AI with Jasson Casey

    FEB 4


    The Evolution of Identity Security in the Age of AI

    In this episode of Cyber Sentries, John Richards sits down with Jasson Casey, CEO and co-founder of Beyond Identity, to explore the intersection of identity security, AI, and enterprise risk management. As organizations rapidly adopt AI tools and agents, the fundamental challenges of identity security are evolving—requiring both new approaches and a return to core principles.

    Identity: The Foundation of Modern Security

    Jasson explains how identity has become the root cause of most security incidents, with identity-based failures accounting for 80% of security tickets. The conversation explores how AI is transforming every role in modern organizations, while highlighting the security implications of this rapid adoption.

    Key Takeaways:

    - Identity security is fundamental to managing AI risk in enterprises
    - Traditional security concepts still apply but require new implementation approaches
    - Organizations need to track data flow and permissions across AI systems

    Looking Ahead

    As AI adoption accelerates, organizations must balance innovation with security. Through proper identity management and understanding of data flow, enterprises can prevent most security incidents while embracing the transformative potential of AI technologies.

    Links & Notes

    - Beyond Identity
    - AI Solutions
    - Connect with Jasson Casey on LinkedIn
    - Connect with Jasson Casey on X
    - CyberProof
    - Learn more about Paladin Cloud
    - Got a question? Ask us here!

    Chapters

    (00:04) - Welcome to Cyber Sentries
    (01:02) - Meet Jasson Casey
    (02:51) - Regrets?
    (08:19) - Friction Point
    (10:28) - Identity
    (17:08) - Adoption
    (22:17) - The Hallmark of Network Security
    (28:10) - Paint Analogy
    (31:17) - Threats
    (34:08) - Visualization Tool
    (35:13) - Their Work in This Space
    (37:05) - Learning More
    (37:36) - Wrap Up

    39 min
  3. Security Data Pipelines: How to Cut SIEM Costs and Noise with Dina Kamal

    JAN 14


    SIEM Speed Without the Sprawl—DataBahn’s Take on Security Data Pipelines

    In this Cyber Sentries: AI Insights for Cloud Security episode, host John Richards sits down with Dina Kamal, Chief Revenue Officer at DataBahn, to tackle a familiar cloud security problem: teams can’t get the right data into the SIEM fast enough, and when they do, costs and noise spike.

    After the introductions, John and Dina dig into why data integration and parsing often consume most of the timeline in SIEM projects—and how a security data pipeline layer can compress onboarding from months to weeks. They also explore what “doing more with less” looks like in a modern SOC: filtering and routing data based on detection value, preserving what’s needed for compliance, and keeping flexibility for SIEM migrations. Dina’s bigger point is that AI only becomes truly useful when it’s paired with domain expertise and real operational context—otherwise it’s easy to end up with impressive-looking outputs that don’t hold up under investigation pressure.

    Questions We Answer in This Episode

    - Why do SIEM projects stall on data onboarding, and what speeds it up?
    - How can you cut SIEM ingestion costs without weakening detections?
    - What does owning your security data change during SIEM migrations?
    - Where does AI help most in SOC workflows, and where do guardrails matter?

    Key Takeaways

    - Data pipelines remove SIEM “plumbing” bottlenecks by automating collection, parsing, and transformation.
    - Cost reduction works best when you filter by security value, not just by volume.
    - Decoupling data collection from the SIEM reduces lock-in and simplifies vendor changes.
    - AI is strongest when guided by security context and experienced practitioners.

    The throughline is practical: better detections and faster investigations start upstream with intentional data handling. By treating the SIEM as a high-value analytics destination instead of a dumping ground, teams can regain capacity, reduce noise, and keep options open as tools and vendors change. And when AI is applied to the right parts of the workflow—with clear constraints and real-world context—it can accelerate outcomes without compromising trust.

    Links & Notes

    - DataBahn
    - Connect with Dina Kamal on LinkedIn
    - Learn more about CyberProof
    - Got a question? Ask us here!

    Chapters

    (00:04) - Welcome to Cyber Sentries
    (01:02) - Meet Dina Kamal
    (03:14) - Data Pipeline Management
    (05:55) - The Target
    (07:32) - Changing Vendors
    (08:34) - No Storage
    (09:31) - Why People Need It
    (13:09) - Ahead of the Curve
    (19:54) - Capturing the Data
    (23:02) - Useful Data
    (26:02) - More with Less
    (27:03) - Visibility
    (29:40) - When to Start
    (31:04) - Wrap Up
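The “filter by security value, not just by volume” idea discussed in this episode can be sketched minimally. This is purely illustrative and not DataBahn’s actual routing logic; the event types and destination names are made up:

```python
# Illustrative sketch: route events with detection value to the SIEM,
# and send everything else to cheap archive storage for compliance.
DETECTION_VALUE = {"auth_failure", "privilege_change", "process_anomaly"}

def route(event: dict) -> str:
    """Pick a destination for a raw log event based on detection value."""
    if event.get("type") in DETECTION_VALUE:
        return "siem"      # high-value: full analytics and detections
    return "archive"       # low-value: cheap retention, replayable later

batch = [{"type": "auth_failure"}, {"type": "heartbeat"}]
print([route(e) for e in batch])  # ['siem', 'archive']
```

The design point is that the routing decision happens upstream of the SIEM, so ingestion cost tracks detection value rather than raw volume, and archived events can still be replayed if a detection later needs them.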

    33 min
  4. Securing AI Agents: How to Stop Credential Leaks and Protect Non‑Human Identities with Idan Gour

    12/10/2025


    Bridging the AI Security Gap—Inside the Rise of Non‑Human Identities

    In this episode of Cyber Sentries from CyberProof, host John Richards sits down with Idan Gour, co-founder and president of Astrix Security, to unpack one of today’s fastest-emerging challenges: securing AI agents and non-human identities (NHIs) in the modern enterprise. As companies rush to adopt generative-AI tools and deploy Model Context Protocol (MCP) servers, they’re unlocking incredible automation—and a brand-new attack surface. Together, John and Idan explore how credential leakage, hard-coded secrets, and rapid “shadow-AI” experimentation are exposing organizations to unseen risks, and what leaders can do to stay ahead.

    From Non‑Human Chaos to Secure‑by‑Design AI

    Idan shares the origin story of Astrix Security—built to close the identity-security gap left behind by traditional IAM tools. He explains how enterprises can safely navigate their AI journey using the Discover → Secure → Deploy framework for managing non-human access. The conversation moves from early automation risk to today’s complex landscape of MCP deployments, secret-management pitfalls, and just-in-time credentialing. John and Idan also discuss Astrix’s open-source MCP wrapper, designed to prevent hard‑coded credentials from leaking during model integration—a practical step organizations can adopt immediately.

    Questions We Answer in This Episode

    - How can companies prevent AI‑agent credentials from leaking across cloud and development environments?
    - What’s driving the explosion of non‑human identities—and how can security teams regain control?
    - When should organizations begin securing AI agents in their adoption cycle?
    - What frameworks or first principles best guide safe AI‑agent deployment?

    Key Takeaways

    - Start securing AI agents early—waiting until “maturity” means you’re already behind.
    - Visibility is everything: you can’t protect what you don’t know exists.
    - Automate secret management and avoid static credentials through just‑in‑time access.
    - Treat AI agents and NHIs as first‑class citizens in your identity‑security program.

    As AI adoption accelerates within every department—from R&D to customer operations—Idan emphasizes that non‑human identity management is the new frontier of cybersecurity. Getting that balance right means enterprises can innovate fearlessly while maintaining the integrity of their data, systems, and brand.

    Links & Notes

    - Learn more about Paladin Cloud
    - Learn more about Astrix Security
    - Open Source MCP Secret Wrapper
    - Idan Gour on LinkedIn
    - Got a question? Ask us here!

    Chapters

    (00:04) - Welcome to Cyber Sentries
    (01:21) - Meet Idan Gour
    (03:36) - As the Vertical Started to Grow
    (06:37) - The Journey
    (09:24) - Struggling
    (13:18) - Risk
    (16:15) - Targeting
    (17:54) - Framework
    (20:18) - Implementing Early
    (21:52) - Back End Risks
    (24:04) - Bridging the Gap
    (26:13) - When to Engage Astrix
    (29:54) - Wrap Up
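The “avoid static credentials through just-in-time access” takeaway boils down to never embedding a secret as a string literal, and instead resolving it at call time from an injector or vault. A minimal sketch, assuming only environment-variable injection (this is not the Astrix MCP wrapper; `get_credential` and `DEMO_API_TOKEN` are hypothetical names):

```python
import os

def get_credential(name: str) -> str:
    """Resolve a secret just in time from the environment.

    The secret never appears in source code, so it cannot leak through
    version control, and rotating it requires no code change.
    """
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} not provisioned")
    return value

# Stand-in for a vault or runtime injector populating the environment.
os.environ["DEMO_API_TOKEN"] = "example-only"
print(get_credential("DEMO_API_TOKEN"))  # example-only
```

In a real deployment the environment would be populated by a secrets manager at process start, and the value would be short-lived so a leak ages out quickly.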

    33 min
  5. AI Compliance Security: How Modular Systems Transform Enterprise Risk Management with Richa Kaul

    11/12/2025


    AI-Powered Compliance: Transforming Enterprise Security

    In this episode of Cyber Sentries, John Richards speaks with Richa Kaul, CEO and founder of Complyance. Richa shares insights on using modular AI systems for enterprise security compliance and discusses the critical balance between automation and human oversight in cybersecurity.

    Why Enterprise Security Compliance Matters Now

    The conversation explores how enterprises struggle with increasing cyber threats and complex third-party vendor networks. Richa explains how moving from reactive to proactive compliance monitoring can transform security posture, sharing real examples from Fortune 100 companies and major sports organizations.

    AI Implementation That Prioritizes Security

    Richa details their approach to implementing AI in compliance, emphasizing their commitment to data privacy and security. The company uses a modular AI infrastructure with opt-in features and minimal data access principles, demonstrating how AI can enhance security without compromising privacy.

    Questions We Answer:

    - How can enterprises shift from reactive to proactive compliance monitoring?
    - What are the key considerations for implementing AI in security compliance?
    - How should companies manage third-party vendor risks in the AI era?
    - What role does employee education play in maintaining security compliance?

    Key Takeaways:

    - Continuous monitoring beats point-in-time compliance checks
    - Modular AI systems offer better security control than all-in-one solutions
    - Third-party vendor risk requires automated, continuous assessment
    - Human elements like training and culture can't be fully automated

    Looking Ahead: Security Challenges

    The discussion concludes with insights into future challenges, including quantum computing's impact on security and the growing complexity of AI-related risks. Richa emphasizes the importance of building nimble, configurable systems to address emerging threats.

    Links & Notes

    - More About Richa Kaul
    - Complyance on LinkedIn and the Web
    - Learn more about Paladin Cloud
    - Learn more about CyberProof
    - Got a question? Ask us here!

    Chapters

    (00:04) - Welcome to Cyber Sentries
    (01:13) - Meet Richa Kaul from Complyance
    (02:32) - Areas Needing Security
    (04:19) - Reactive vs. Proactive
    (06:17) - Integrating AI
    (07:59) - AI Compliance Challenges
    (10:48) - Training Their Models
    (12:16) - Evaluating Third Parties
    (15:49) - The Team
    (19:04) - Looking to the Future
    (20:44) - How Others Are Implementing AI
    (24:04) - Creating Capacity
    (25:44) - Companies Doing It Well
    (27:25) - When They Don’t Have the Resources
    (28:50) - Wrap Up

    31 min
  6. AI Governance Essentials: Navigating Security and Compliance in Enterprise AI with Walter Haydock

    10/08/2025


    AI Governance in an Era of Rapid Change

    In this episode of Cyber Sentries, John Richards talks with Walter Haydock, founder of StackAware, about navigating the complex landscape of AI governance and security. Walter brings unique insights from his background as a Marine Corps intelligence officer and his extensive experience in both government and private sectors.

    Understanding AI Risk Management

    Walter shares his perspective on how organizations can develop practical AI governance frameworks while balancing innovation with security. He outlines a three-step approach starting with policy development, followed by a thorough inventory of AI tools, and an assessment of cybersecurity implications.

    The discussion explores how different industries face varying levels of AI risk, with healthcare emerging as a particularly challenging sector where both opportunities and dangers are amplified. Walter emphasizes the importance of aligning AI governance with business objectives rather than treating it as a standalone initiative.

    Questions We Answer in This Episode:

    - How should organizations approach AI governance and risk management?
    - What are the key challenges in implementing ISO 42001 for AI systems?
    - How can companies address the growing problem of "shadow AI"?
    - What are the implications of fragmented AI regulations across different jurisdictions?

    Key Takeaways:

    - Organizations need clear AI policies that define acceptable use boundaries
    - Risk management should integrate with existing frameworks rather than create separate systems
    - Companies must balance compliance requirements with innovation needs
    - Employee education and flexible approval processes help prevent shadow AI usage

    The Regulatory Landscape

    The conversation delves into emerging AI regulations, from New York City's local laws to Colorado's comprehensive AI Act. Walter provides valuable insights into how organizations can prepare for upcoming regulatory changes while maintaining operational efficiency.

    Links & Notes

    - StackAware
    - Connect with Walter on LinkedIn
    - Learn more about Paladin Cloud
    - Got a question? Ask us here!

    Chapters

    (00:04) - Welcome to Cyber Sentries
    (00:30) - Walter Haydock from StackAware
    (01:13) - Walter’s Background
    (02:36) - Areas Needing Improvement
    (03:23) - Integrating AI
    (04:33) - StackAware’s Role
    (06:25) - AI Certification Standard
    (07:17) - Implementation Challenges
    (08:28) - Thoughts on Looser Protocols
    (11:16) - Regulations
    (13:01) - Approaches
    (14:57) - Areas of Concern
    (17:26) - Handling Risk
    (18:37) - Who Should Own AI Governance
    (19:43) - Pushback?
    (21:15) - Proper Techniques
    (22:26) - What Levels
    (23:49) - Smaller Companies
    (25:54) - Ideal Legislation
    (28:48) - Plugging Walter
    (29:36) - Wrap Up

    31 min
  7. Distributed AI Security: How Enterprise Systems Are Evolving for AI Integration with Mark Fussell

    09/10/2025


    Revolutionizing Cloud Security with AI-Powered Distributed Systems

    In this episode of Cyber Sentries, John Richards sits down with Mark Fussell, CEO of Diagrid and co-creator of the Distributed Application Runtime (DAPR). Mark shares insights from his extensive experience in distributed systems and discusses how modern architectures are evolving to incorporate AI capabilities.

    The Evolution of Distributed Applications

    Mark explains how DAPR emerged from observing common challenges teams faced when building distributed systems. The project, which started in 2018 and became open source in 2019, has grown into a graduated Cloud Native Computing Foundation (CNCF) project used by thousands of companies worldwide. He details how DAPR’s component model allows teams to swap infrastructure without changing code, providing crucial flexibility for enterprise systems.

    Questions We Answer in This Episode

    - How are distributed applications transforming modern software development?
    - What role does security play in distributed architectures?
    - How can organizations integrate AI agents into existing distributed systems?
    - What’s next for distributed systems in the age of AI?

    Key Takeaways

    - DAPR provides essential building blocks for secure, distributed applications
    - Workflow durability is crucial for enterprise-ready AI agent systems
    - Identity-based security principles are fundamental to distributed architectures
    - The future of distributed systems will blend traditional microservices with AI agents

    The Future of AI in Distributed Systems

    Mark discusses Diagrid’s Catalyst platform, which helps organizations build enterprise-ready distributed applications with integrated AI capabilities. He emphasizes the importance of security, durability, and workflow management as organizations begin incorporating AI agents into their systems.

    Links & Notes

    - Connect with Mark on LinkedIn
    - Learn more about DAPR
    - Diagrid
    - Learn more about Paladin Cloud
    - Got a question? Ask us here!

    Chapters

    (00:00) - Welcome to Cyber Sentries
    (00:30) - Diagrid’s Mark Fussell
    (01:07) - Meet Mark
    (04:37) - The Journey
    (10:55) - New AI Models
    (15:01) - On the Security Side
    (16:52) - Where Things Go Next
    (20:10) - Bringing in New Agentic Models
    (24:20) - Catalyst
    (27:12) - Getting in Touch
    (28:35) - Wrap Up

    30 min
  8. AI Security Architecture: How Data-Centric Models Transform Enterprise Security with Mohit Tiwari

    08/13/2025


    AI-Powered Cloud Security: From Research Lab to Enterprise Reality

    In this episode of Cyber Sentries, John Richards talks with Mohit Tiwari, co-founder and CEO of Symmetry Systems and associate professor at UT Austin, about transforming academic research into practical enterprise security solutions. Mohit shares his journey from academic research to founding a company that’s revolutionizing how organizations approach data security in the age of AI.

    Bridging Academia and Industry

    Mohit discusses how his research team at UT Austin developed innovative approaches to data security and privacy, working with organizations like the NSA, Lockheed, and General Dynamics. Their work led to founding Symmetry Systems in 2020, focusing on operationalizing data flow security across enterprise environments.

    The Evolution of Data Security

    The conversation explores how traditional asset-centric security approaches are giving way to data-centric models. Mohit explains how Symmetry Systems helps organizations protect data flows across multiple applications and platforms, making security more efficient and effective than traditional bespoke solutions.

    Questions We Answer in This Episode:

    - How can organizations move from bespoke security solutions to systematic approaches?
    - What role does AI governance play in modern enterprise security?
    - How can companies effectively manage data security across different AI implementation scenarios?

    Key Takeaways:

    - Data-centric security approaches are becoming crucial as AI adoption increases
    - Organizations need interoperable policy languages for effective AI governance
    - Purpose-built, smaller AI models can be more effective than large, general-purpose ones
    - Security solutions must evolve to handle the massive scale of modern enterprise data

    Looking Ahead: The Future of AI Security

    The episode concludes with insights into emerging challenges in AI security, including the need for better business purpose frameworks and advanced detection capabilities for sophisticated attacks like ransomware.

    Resources

    - Symmetry Systems website
    - Connect with Symmetry Systems on LinkedIn
    - Learn more about Paladin Cloud
    - Learn more about CyberProof
    - Got a question? Ask us here!

    Chapters

    (00:04) - Welcome to Cyber Sentries
    (01:02) - Meet Mohit
    (03:06) - Application Examples
    (08:15) - Key Metrics
    (10:52) - Effects of AI
    (14:16) - Environments and Interfaces
    (16:39) - Tying It Together
    (18:19) - AI in the Process
    (22:51) - Model Decisions
    (25:41) - Research to Project
    (29:13) - Problems
    (31:25) - Wrap Up

    34 min
