Modern Cyber with Jeremy Snyder

Jeremy Snyder

Welcome to Modern Cyber with Jeremy Snyder, a cutting-edge podcast series where cybersecurity thought leaders come together to explore the evolving landscape of digital security. In each episode, Jeremy engages with top cybersecurity professionals, uncovering the latest trends, innovations, and challenges shaping the industry. Also the home of 'This Week in AI Security', a snappy weekly round up of interesting stories from across the AI threat landscape.

  1. This Week in AI Security - 5th March 2026

    3D AGO

    This Week in AI Security - 5th March 2026

    In this week's episode, Jeremy records straight from the sidelines of the [un]prompted security conference in San Francisco. Before diving into his key takeaways from the event, he covers a massive, AI-assisted data breach and a critical shift in how Google API keys must be handled. Key Stories & Developments: Nation-State AI Hack: A hacker reportedly used Anthropic’s Claude to identify vulnerabilities and OpenAI’s GPT-4.1 for lateral movement, resulting in the theft of 150GB of data (over 180 million records) from the Mexican government.MCP Infrastructure Flaws: An unauthenticated Server-Side Request Forgery (SSRF) flaw leading to Remote Code Execution (RCE) was found in a widely used Atlassian MCP.The Gemini API Key Crisis: A flaw in the Gemini AI panel allowed browser extensions to escalate privileges. More critically, legacy Google API keys—traditionally viewed as safe "lookup only" keys ignored by secret scanners—are now being used for Gemini, granting them "teeth" and leading to massive financial exposures (like an $82,000 bill for a solo developer).Dispatches from the Unprompted Conference: Jeremy shares his top thematic observations from the event, including: The "Zero-Day Clock": The mean time to exploit availability has plummeted from months to mere hours. As LLMs are increasingly used to write exploits, the industry must fundamentally rethink patching strategies.LLMs Finding Legacy Bugs: Researchers demonstrated LLMs uncovering vulnerabilities in massive software projects that have evaded human detection for decades—some predating the invention of Git.Treating Prompts as Code: A key takeaway from Google's Gemini workspace team: as prompts become the primary instruction set for executing tasks, developers must apply traditional secure coding hygiene and logic to their prompt engineering.Episode Links https://www.bloomberg.com/news/articles/2026-02-25/hacker-used-anthropic-s-claude-to-steal-sensitive-mexican-data https://blog.pluto.security/p/mcpwnfluence-cve-2026-27825-critical https://cyberpress.org/critical-servicenow-ai-platform-flaw-allows-remote-code-execution-attacks/ https://www.darkreading.com/endpoint-security/bug-google-gemini-ai-panel-hijacking https://trufflesecurity.com/blog/google-api-keys-werent-secrets-but-then-gemini-changed-the-rules https://boingboing.net/2026/02/27/stolen-gemini-api-key-racks-up-82000-in-48-hours-for-solo-dev.htmlhttps://unpromptedcon.org/ Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo

    14 min
  2. Caleb Sima of WhiteRabbit

    4D AGO

    Caleb Sima of WhiteRabbit

    In this episode of Modern Cyber, Jeremy is joined by cybersecurity veteran Caleb Sima for a deep dive into the practical realities of securing AI inside organizations. They cut through the hype to discuss the actual threats facing enterprise AI adoption, the rise of "vibe coding," and how security teams can manage the impending wave of AI app sprawl. Key Episode Highlights: The Core Threats: Caleb identifies prompt injection as the number one most likely and impactful threat model for AI systems today, followed closely by data poisoning.The Rise of "App Sprawl": As employees across departments like HR and Finance use AI to build their own functional applications, organizations will face a massive shadow IT challenge without proper deployment pipelines.Defending the Inputs and Outputs: Managing AI security requires an approach similar to handling cross-site scripting, monitoring the inputs coming from untrusted sources and analyzing the outputs to prevent unauthorized actions.Getting Back to Basics: To secure AI, organizations must start with foundational visibility, establishing AI councils, and routing all LLM traffic through centralized enterprise gateways or firewalls.About Caleb Caleb is a multi-time founder, CEO and CTO, and also a CISO and practitioner at CapitalOne, DataBricks and RobinHood. Caleb has also recently started his own cyber investment firm, WhiteRabbit. At his core, Caleb is an engineer who loves problem-solving, getting into the weeds at the keyboard, and building things that matter. Episode Links Caleb Sima on LinkedIn: https://www.linkedin.com/in/calebsima/WhiteRabbit: https://wr.vc/

    43 min
  3. This Week in AI Security - 26th February 2026

    FEB 26

    This Week in AI Security - 26th February 2026

    In this episode of This Week in AI Security for February 26, 2026, Jeremy covers another packed week featuring AI privacy boundary failures, agent-driven outages, AI-accelerated cybercrime, Android malware innovation, platform responsibility debates, and the continued risks of vibe-coded applications. Key Stories & Developments: Microsoft Copilot Confidential Email Bug: Microsoft Copilot was found summarizing confidential emails due to a flaw in the Copilot Chat “Work” tab. AI Agent Triggers AWS Bedrock Outage: An outage involving Amazon Bedrock exposed the risks of agentic coding systems with broad permissions.AI-Powered Assembly Line for Cybercrime: A Russian-speaking attacker breached FortiGate firewalls across 55 countries in just five weeks using AI as a force multiplier.PromptSpy: Android Malware Using Live LLM Command & Control: PromptSpy became the first known Android malware to dynamically leverage Google Gemini at runtime. Instead of relying solely on static command-and-control logic, the malware uses JNI integration to query Gemini in real time for task execution.ChatGPT, Mental Health, and Law Enforcement Boundaries: Following a shooting incident in Tumbler Ridge, Canada, investigators discovered significant usage of ChatGPT by the suspect prior to the event. Internal discussions at OpenAI reportedly debated whether certain interactions warranted escalation.LLM-Generated Passwords Lack Entropy: Security researchers highlighted that passwords generated by LLMs exhibit approximately 80% less entropy than those created by traditional password generators.Vibe-Coded Security Suite Exposes Master Keys: A Reddit thread revealed that a suite of “RR”-branded tools were entirely vibe-coded applications with severe security flaws. Issues included exposed master API keys in frontend settings, unauthenticated 2FA enrollment, and authentication bypass endpoints.Anthropic Moves from Detection to Remediation: Anthropic introduced tooling aimed at moving beyond passive source-code analysis toward automated remediation of vulnerabilities.Episode Links https://www.bleepingcomputer.com/news/microsoft/microsoft-says-bug-causes-copilot-to-summarize-confidential-emails/ https://www.thestandard.com.hk/tech-and-startup/article/324872/Amazons-cloud-unit-hit-was-hit-by-least-two-outages-involving-AI-tools-in-December-FT-says https://www.reuters.com/business/retail-consumer/amazons-cloud-unit-hit-by-least-two-outages-involving-ai-tools-ft-says-2026-02-20/ https://www.bleepingcomputer.com/news/security/amazon-ai-assisted-hacker-breached-600-fortigate-firewalls-in-5-weeks/ https://cyberandramen.net/2026/02/21/llms-in-the-kill-chain-inside-a-custom-mcp-targeting-fortigate-devices-across-continents/ https://www.bleepingcomputer.com/news/security/promptspy-is-the-first-known-android-malware-to-use-generative-ai-at-runtime/ https://techcrunch.com/2026/02/21/openai-debated-calling-police-about-suspected-canadian-shooters-chats/ https://www.techradar.com/pro/security/dont-trust-ai-to-come-up-with-a-new-strong-password-for-you-llms-are-pretty-poor-at-creating-new-logins-experts-warn https://www.reddit.com/r/selfhosted/comments/1rckopd/huntarr_your_passwords_and_your_entire_arr_stacks/ https://www.anthropic.com/news/claude-code-security

    15 min
  4. This Week in AI Security - 19th February 2026

    FEB 19

    This Week in AI Security - 19th February 2026

    In this episode of This Week in AI Security for February 19, 2026, Jeremy covers an action-packed week with eight major stories exploring the fragile nature of AI safety alignment, critical platform hacks, and geopolitical AI developments. Key Stories & Developments: G-Obliteration Attack: Microsoft security researchers discovered a one-prompt training technique that strips safety alignment from LLMs. By leveraging Group Relative Policy Optimization (GRPO), attackers can use a single mild prompt to cause cross-category generalization of harm. This effectively removes guardrails across 15 open-source models while preserving their utility.Orchids Vibe-Coding Hack: A BBC reporter was hacked on Orchids, a popular "vibe-coding" platform. A security researcher demonstrated a malicious code injection that compromised the user's development environment.AI vs. Legacy Email Security: AI-powered cyberattacks are successfully bypassing 88% of legacy email security systems. Attackers are utilizing LLMs to generate highly authentic phishing and impersonation content at scale.AI Doctors Evade Privacy Rules: AI-powered health services are not subject to the same strict privacy regulations as traditional healthcare facilities. This raises concerns around data leaks and medical hallucinations.OpenClaw Info Stealer: A variant of the Vidar info-stealer is targeting the OpenClaw ecosystem. The attack aims to exfiltrate configuration files and gateway authentication tokens.OpenClaw Founder Joins OpenAI: Peter Steinberger, the creator of the OpenClaw framework, has joined OpenAI. The OpenClaw project will transition to an open-source foundation supported by OpenAI.Claude's Geopolitical Role: Reports indicate that Anthropic's Claude was utilized via the Palantir platform during a US military raid in Venezuela. This raid led to the capture of Nicolas Maduro.ASIS AI Safety Report 2026: The International AI Safety Report highlights three emerging risks. These include the lowered barrier for biological weapons, the surge in deepfakes and fraud, and the difficulty of safety research.Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo Episode Links https://www.microsoft.com/en-us/security/blog/2026/02/09/prompt-attack-breaks-llm-safety/ https://www.bbc.com/news/articles/cy4wnw04e8wo https://www.cpapracticeadvisor.com/2026/02/09/study-ai-powered-cyber-attacks-hit-88-of-legacy-email-security-systems/177694/ https://cyberscoop.com/ai-healthcare-apps-hipaa-privacy-risks-openai-anthropic/ https://thehackernews.com/2026/02/infostealer-steals-openclaw-ai-agent.html https://techcrunch.com/2026/02/15/openclaw-creator-peter-steinberger-joins-openai/ https://www.theguardian.com/technology/2026/feb/14/us-military-anthropic-ai-model-claude-venezuela-raid https://www.asisonline.org/security-management-magazine/latest-news/today-in-security/2026/february/2026-international-safety-report/

    13 min
  5. This Week in AI Security - 12th February 2026

    FEB 12

    This Week in AI Security - 12th February 2026

    In this episode of This Week in AI Security, Jeremy covers a concise but critical set of stories for the week of February 12, 2026. From physical world prompt injections targeting autonomous vehicles to massive data leaks in consumer AI wrappers, the intersection of AI and infrastructure remains the primary battleground. Key Stories & Developments: Prompt Injecting Autonomous Vehicles: Researchers at UCSC and Johns Hopkins have demonstrated that autonomous cars and drones can be compromised by "visual" prompt injections placed on physical signs, causing them to ignore traffic rules or misinterpret their surroundings.Massive Chat App Leak: The "Chat & Ask AI" wrapper application exposed 300 million messages belonging to 25 million users due to a simple Firebase misconfiguration that allowed unauthenticated access to read, modify, and delete data.Docker AI Metadata Attacks: A new vulnerability in Docker's AI assistant allows attackers to trigger exploits by planting malicious instructions within container image metadata.Claude Opus 4.6 vs. Security: Anthropic's latest model, Claude Opus 4.6, has demonstrated a frightening new capability: finding high-severity vulnerabilities and logic bugs via reasoning (rather than fuzzing) without needing specialized prompting or scaffolding. Worried about OpenClaw on your network? The OpenClaw crisis proved that employees are deploying unvetted AI agents on their local machines. FireTail helps you discover and govern Shadow AI before it becomes a breach. Scan Your Network for Shadow Agents Now https://www.firetail.ai/schedule-your-demo Episode Links https://www.theregister.com/2026/01/30/road_sign_hijack_ai/ https://www.malwarebytes.com/blog/news/2026/02/ai-chat-app-leak-exposes-300-million-messages-tied-to-25-million-users https://www.govinfosecurity.com/docker-ai-bug-lets-image-metadata-trigger-attacks-a-30709 https://www.axios.com/2026/02/05/anthropic-claude-opus-46-software-hunting https://red.anthropic.com/2026/zero-days/

    13 min
  6. This Week in AI Security - 5th February 2026

    FEB 5

    This Week in AI Security - 5th February 2026

    In this first episode of February 2026, Jeremy breaks down a high-stakes week in AI security, featuring critical framework flaws, cloud-native exploits, and a major security warning regarding a popular autonomous AI agent. Key Stories & Developments: Operation Bizarre Bazaar: Threat actors are actively targeting exposed LLM infrastructure to steal computing resources for cryptocurrency mining and resell API access on dark markets, attempting to pivot into internal systems via compromised MCP servers.Gemini MCP Tool Exploit: A critical Remote Code Execution (RCE) vulnerability was identified in a Gemini Model Context Protocol (MCP) tool, highlighting the recurring theme that the infrastructure powering LLMs remains a primary weak point.MoltBook API Leak: Researchers discovered a hardcoded Supabase API key in "MoltBook," a social network for AI agents. This flaw granted unauthenticated access to the entire production database, exposing over 1.5 million API keys.Bondu AI Toy Breach: A privacy failure in an AI-powered dinosaur toy left 50,000 chat log records exposed to anyone with a Gmail account, underscoring the lack of robust authentication in consumer AI IoT devices.CISA Chief's Data Mishandling: Reports surfaced that the acting head of the country's cyber defense agency uploaded sensitive "official use only" documents into a public version of ChatGPT, bypassing enterprise controls and security protocols. Worried about OpenClaw on your network? The OpenClaw crisis proved that employees are deploying unvetted AI agents on their local machines. FireTail helps you discover and govern Shadow AI before it becomes a breach. Scan Your Network for Shadow Agents Now https://www.firetail.ai/schedule-your-demo Episode Links https://www.bleepingcomputer.com/news/security/hackers-hijack-exposed-llm-endpoints-in-bizarre-bazaar-operation/ https://darkwebinformer.com/cve-2026-0755-reported-zero-day-in-gemini-mcp-tool-could-allow-remote-code-execution/ https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys https://ai.plainenglish.io/clawdbot-security-guide-de77b45ab719 https://blackoutvpn.au/blog/dont-buy-internet-connected-toys https://www.politico.com/news/2026/01/27/cisa-madhu-gottumukkala-chatgpt-00749361

    12 min
  7. This Week in AI Security - 29th January 2026

    JAN 29

    This Week in AI Security - 29th January 2026

    In this final episode of January 2026, Jeremy breaks down a high-stakes week in AI security, featuring critical framework flaws, cloud-native exploits, and a major security warning regarding a popular autonomous AI agent. Key Stories & Developments: Chainlit Framework Flaws: Two critical CVEs were identified in Chainlit, a popular Python package for building enterprise chatbots. These vulnerabilities, including Arbitrary File Read and Server-Side Request Forgery (SSRF), highlight the supply chain risks inherent in the rapidly growing AI development ecosystem.Google Gemini Workspace Exploit: Researchers demonstrated how Gemini can be manipulated via malicious calendar invites. By embedding hidden instructions (similar to Ascii or emoji smuggling), attackers can trick the AI into exfiltrating sensitive user data, such as meeting details and attachments.VS Code "Spyware" Plugins: Over 1.5 million developers were potentially exposed to malicious VS Code extensions impersonating ChatGPT. These plugins serve as "watering hole" attacks designed to harvest sensitive environment variables, credentials, and deployment keys.Vertex AI Privilege Escalation: A novel attack chain in Google’s Vertex AI was disclosed. Attackers used a malicious reverse shell in a reasoning engine function to escalate privileges via the Instance Metadata Service, gaining master access to chat sessions, storage buckets, and logs.The "Cloudbot" Warning: A deep dive into Cloudbot (now rebranded as ClawdBot), a general-purpose AI agent. Researchers found hundreds of instances sitting wide open on the internet, many providing full root shell access and exposing personal conversation histories and API keys.Episode Links https://www.theregister.com/2026/01/20/ai_framework_flaws_enterprise_clouds/https://www.securityweek.com/weaponized-invite-enabled-calendar-data-theft-via-google-gemini/https://cybernews.com/security/fake-chatgpt-vscode-extensions-compromised-developers/https://gbhackers.com/google-vertex-ai-flaw/https://www.insurancejournal.com/magazines/mag-features/2026/01/26/855293.htmhttps://arxiv.org/pdf/2601.10338https://techcrunch.com/2026/01/27/everything-you-need-to-know-about-viral-personal-ai-assistant-clawdbot-now-moltbot/https://securityboulevard.com/2026/01/clawdbot-is-what-happens-when-ai-gets-root-access-a-security-experts-take-on-silicon-valleys-hottest-ai-agent/https://jpcaparas.medium.com/hundreds-of-clawdbot-instances-were-exposed-on-the-internet-heres-how-to-not-be-one-of-them-63fa813e6625https://www.bitdefender.com/en-us/blog/hotforsecurity/moltbot-security-alert-exposed-clawdbot-control-panels-risk-credential-leaks-and-account-takeovers Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo

    24 min
  8. Sydney Marrone of Nebulock

    JAN 28

    Sydney Marrone of Nebulock

    In this episode of Modern Cyber, Jeremy is joined by Sydney Marrone, a premier expert in the field of threat hunting and the Head of Threat Hunting at Nebulock. The conversation explores the rapidly evolving intersection of threat hunting and artificial intelligence, specifically focusing on how AI agents are transforming the speed and efficacy of defensive operations. Sydney shares her journey from "crawling under desks" in IT to building elite threat hunting teams at major organizations like Lumen (formerly CenturyLink) and Splunk. She breaks down her newly released Agentic Threat Hunting Framework (ATHF) and the LOCK pattern (Learn, Observe, Check, Keep), explaining how AI can condense a hunt that previously took four weeks into a mere 45 minutes. They also discuss the critical need for AI governance, the risks of "ungoverned access," and why "trust but verify" remains the golden rule when integrating LLMs into security workflows. About Sydney Marrone Sydney Marrone is the Head of Threat Hunting at Nebulock and a co-founder of the THOR Collective. With over a decade of experience in incident response, forensics, and blue teaming, she has become a leading voice in structured threat hunting. Sydney is the author of the Agentic Threat Hunting Framework (ATHF) and the co-author of the PEAK Threat Hunting Framework, which won a SANS award for its contribution to the community. A respected author and educator, Sydney co-authored The Threat Hunter's Cookbook and is currently developing a SANS course focused on threat hunting. Her work focuses on moving organizations from reactive to proactive security postures through advanced data science, automation, and authentic AI integration. Episode Links Nebulock (AI-Powered Threat Hunting): https://nebulock.io/ Agentic Threat Hunting Framework (ATHF): https://github.com/Nebulock-Inc/agentic-threat-hunting-framework THOR Collective (Substack & Community): https://dispatch.thorcollective.com/ PEAK Threat Hunting Framework: https://www.splunk.com/en_us/blog/security/peak-threat-hunting-framework.html HEARTH Repository (THOR Collective): https://github.com/THORCollective/HEARTH Threat Hunting MCP Server: https://github.com/THORCollective/threat-hunting-mcp-server

    39 min

About

Welcome to Modern Cyber with Jeremy Snyder, a cutting-edge podcast series where cybersecurity thought leaders come together to explore the evolving landscape of digital security. In each episode, Jeremy engages with top cybersecurity professionals, uncovering the latest trends, innovations, and challenges shaping the industry. Also the home of 'This Week in AI Security', a snappy weekly round up of interesting stories from across the AI threat landscape.