Modern Cyber with Jeremy Snyder

Jeremy Snyder

Welcome to Modern Cyber with Jeremy Snyder, a cutting-edge podcast series where cybersecurity thought leaders come together to explore the evolving landscape of digital security. In each episode, Jeremy engages with top cybersecurity professionals, uncovering the latest trends, innovations, and challenges shaping the industry. Also the home of 'This Week in AI Security', a snappy weekly round up of interesting stories from across the AI threat landscape.

  1. This Week in AI Security - 26th February 2026

    3D AGO

    This Week in AI Security - 26th February 2026

    In this episode of This Week in AI Security for February 26, 2026, Jeremy covers another packed week featuring AI privacy boundary failures, agent-driven outages, AI-accelerated cybercrime, Android malware innovation, platform responsibility debates, and the continued risks of vibe-coded applications. Key Stories & Developments: Microsoft Copilot Confidential Email Bug: Microsoft Copilot was found summarizing confidential emails due to a flaw in the Copilot Chat “Work” tab. AI Agent Triggers AWS Bedrock Outage: An outage involving Amazon Bedrock exposed the risks of agentic coding systems with broad permissions.AI-Powered Assembly Line for Cybercrime: A Russian-speaking attacker breached FortiGate firewalls across 55 countries in just five weeks using AI as a force multiplier.PromptSpy: Android Malware Using Live LLM Command & Control: PromptSpy became the first known Android malware to dynamically leverage Google Gemini at runtime. Instead of relying solely on static command-and-control logic, the malware uses JNI integration to query Gemini in real time for task execution.ChatGPT, Mental Health, and Law Enforcement Boundaries: Following a shooting incident in Tumbler Ridge, Canada, investigators discovered significant usage of ChatGPT by the suspect prior to the event. Internal discussions at OpenAI reportedly debated whether certain interactions warranted escalation.LLM-Generated Passwords Lack Entropy: Security researchers highlighted that passwords generated by LLMs exhibit approximately 80% less entropy than those created by traditional password generators.Vibe-Coded Security Suite Exposes Master Keys: A Reddit thread revealed that a suite of “RR”-branded tools were entirely vibe-coded applications with severe security flaws. Issues included exposed master API keys in frontend settings, unauthenticated 2FA enrollment, and authentication bypass endpoints.Anthropic Moves from Detection to Remediation: Anthropic introduced tooling aimed at moving beyond passive source-code analysis toward automated remediation of vulnerabilities.Episode Links https://www.bleepingcomputer.com/news/microsoft/microsoft-says-bug-causes-copilot-to-summarize-confidential-emails/ https://www.thestandard.com.hk/tech-and-startup/article/324872/Amazons-cloud-unit-hit-was-hit-by-least-two-outages-involving-AI-tools-in-December-FT-says https://www.reuters.com/business/retail-consumer/amazons-cloud-unit-hit-by-least-two-outages-involving-ai-tools-ft-says-2026-02-20/ https://www.bleepingcomputer.com/news/security/amazon-ai-assisted-hacker-breached-600-fortigate-firewalls-in-5-weeks/ https://cyberandramen.net/2026/02/21/llms-in-the-kill-chain-inside-a-custom-mcp-targeting-fortigate-devices-across-continents/ https://www.bleepingcomputer.com/news/security/promptspy-is-the-first-known-android-malware-to-use-generative-ai-at-runtime/ https://techcrunch.com/2026/02/21/openai-debated-calling-police-about-suspected-canadian-shooters-chats/ https://www.techradar.com/pro/security/dont-trust-ai-to-come-up-with-a-new-strong-password-for-you-llms-are-pretty-poor-at-creating-new-logins-experts-warn https://www.reddit.com/r/selfhosted/comments/1rckopd/huntarr_your_passwords_and_your_entire_arr_stacks/ https://www.anthropic.com/news/claude-code-security

    15 min
  2. This Week in AI Security - 19th February 2026

    FEB 19

    This Week in AI Security - 19th February 2026

    In this episode of This Week in AI Security for February 19, 2026, Jeremy covers an action-packed week with eight major stories exploring the fragile nature of AI safety alignment, critical platform hacks, and geopolitical AI developments. Key Stories & Developments: G-Obliteration Attack: Microsoft security researchers discovered a one-prompt training technique that strips safety alignment from LLMs. By leveraging Group Relative Policy Optimization (GRPO), attackers can use a single mild prompt to cause cross-category generalization of harm. This effectively removes guardrails across 15 open-source models while preserving their utility.Orchids Vibe-Coding Hack: A BBC reporter was hacked on Orchids, a popular "vibe-coding" platform. A security researcher demonstrated a malicious code injection that compromised the user's development environment.AI vs. Legacy Email Security: AI-powered cyberattacks are successfully bypassing 88% of legacy email security systems. Attackers are utilizing LLMs to generate highly authentic phishing and impersonation content at scale.AI Doctors Evade Privacy Rules: AI-powered health services are not subject to the same strict privacy regulations as traditional healthcare facilities. This raises concerns around data leaks and medical hallucinations.OpenClaw Info Stealer: A variant of the Vidar info-stealer is targeting the OpenClaw ecosystem. The attack aims to exfiltrate configuration files and gateway authentication tokens.OpenClaw Founder Joins OpenAI: Peter Steinberger, the creator of the OpenClaw framework, has joined OpenAI. The OpenClaw project will transition to an open-source foundation supported by OpenAI.Claude's Geopolitical Role: Reports indicate that Anthropic's Claude was utilized via the Palantir platform during a US military raid in Venezuela. This raid led to the capture of Nicolas Maduro.ASIS AI Safety Report 2026: The International AI Safety Report highlights three emerging risks. These include the lowered barrier for biological weapons, the surge in deepfakes and fraud, and the difficulty of safety research.Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo Episode Links https://www.microsoft.com/en-us/security/blog/2026/02/09/prompt-attack-breaks-llm-safety/ https://www.bbc.com/news/articles/cy4wnw04e8wo https://www.cpapracticeadvisor.com/2026/02/09/study-ai-powered-cyber-attacks-hit-88-of-legacy-email-security-systems/177694/ https://cyberscoop.com/ai-healthcare-apps-hipaa-privacy-risks-openai-anthropic/ https://thehackernews.com/2026/02/infostealer-steals-openclaw-ai-agent.html https://techcrunch.com/2026/02/15/openclaw-creator-peter-steinberger-joins-openai/ https://www.theguardian.com/technology/2026/feb/14/us-military-anthropic-ai-model-claude-venezuela-raid https://www.asisonline.org/security-management-magazine/latest-news/today-in-security/2026/february/2026-international-safety-report/

    13 min
  3. This Week in AI Security - 12th February 2026

    FEB 12

    This Week in AI Security - 12th February 2026

    In this episode of This Week in AI Security, Jeremy covers a concise but critical set of stories for the week of February 12, 2026. From physical world prompt injections targeting autonomous vehicles to massive data leaks in consumer AI wrappers, the intersection of AI and infrastructure remains the primary battleground. Key Stories & Developments: Prompt Injecting Autonomous Vehicles: Researchers at UCSC and Johns Hopkins have demonstrated that autonomous cars and drones can be compromised by "visual" prompt injections placed on physical signs, causing them to ignore traffic rules or misinterpret their surroundings.Massive Chat App Leak: The "Chat & Ask AI" wrapper application exposed 300 million messages belonging to 25 million users due to a simple Firebase misconfiguration that allowed unauthenticated access to read, modify, and delete data.Docker AI Metadata Attacks: A new vulnerability in Docker's AI assistant allows attackers to trigger exploits by planting malicious instructions within container image metadata.Claude Opus 4.6 vs. Security: Anthropic's latest model, Claude Opus 4.6, has demonstrated a frightening new capability: finding high-severity vulnerabilities and logic bugs via reasoning (rather than fuzzing) without needing specialized prompting or scaffolding. Worried about OpenClaw on your network? The OpenClaw crisis proved that employees are deploying unvetted AI agents on their local machines. FireTail helps you discover and govern Shadow AI before it becomes a breach. Scan Your Network for Shadow Agents Now https://www.firetail.ai/schedule-your-demo Episode Links https://www.theregister.com/2026/01/30/road_sign_hijack_ai/ https://www.malwarebytes.com/blog/news/2026/02/ai-chat-app-leak-exposes-300-million-messages-tied-to-25-million-users https://www.govinfosecurity.com/docker-ai-bug-lets-image-metadata-trigger-attacks-a-30709 https://www.axios.com/2026/02/05/anthropic-claude-opus-46-software-hunting https://red.anthropic.com/2026/zero-days/

    13 min
  4. This Week in AI Security - 5th February 2026

    FEB 5

    This Week in AI Security - 5th February 2026

    In this first episode of February 2026, Jeremy breaks down a high-stakes week in AI security, featuring critical framework flaws, cloud-native exploits, and a major security warning regarding a popular autonomous AI agent. Key Stories & Developments: Operation Bizarre Bazaar: Threat actors are actively targeting exposed LLM infrastructure to steal computing resources for cryptocurrency mining and resell API access on dark markets, attempting to pivot into internal systems via compromised MCP servers.Gemini MCP Tool Exploit: A critical Remote Code Execution (RCE) vulnerability was identified in a Gemini Model Context Protocol (MCP) tool, highlighting the recurring theme that the infrastructure powering LLMs remains a primary weak point.MoltBook API Leak: Researchers discovered a hardcoded Supabase API key in "MoltBook," a social network for AI agents. This flaw granted unauthenticated access to the entire production database, exposing over 1.5 million API keys.Bondu AI Toy Breach: A privacy failure in an AI-powered dinosaur toy left 50,000 chat log records exposed to anyone with a Gmail account, underscoring the lack of robust authentication in consumer AI IoT devices.CISA Chief's Data Mishandling: Reports surfaced that the acting head of the country's cyber defense agency uploaded sensitive "official use only" documents into a public version of ChatGPT, bypassing enterprise controls and security protocols. Worried about OpenClaw on your network? The OpenClaw crisis proved that employees are deploying unvetted AI agents on their local machines. FireTail helps you discover and govern Shadow AI before it becomes a breach. Scan Your Network for Shadow Agents Now https://www.firetail.ai/schedule-your-demo Episode Links https://www.bleepingcomputer.com/news/security/hackers-hijack-exposed-llm-endpoints-in-bizarre-bazaar-operation/ https://darkwebinformer.com/cve-2026-0755-reported-zero-day-in-gemini-mcp-tool-could-allow-remote-code-execution/ https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys https://ai.plainenglish.io/clawdbot-security-guide-de77b45ab719 https://blackoutvpn.au/blog/dont-buy-internet-connected-toys https://www.politico.com/news/2026/01/27/cisa-madhu-gottumukkala-chatgpt-00749361

    12 min
  5. This Week in AI Security - 29th January 2026

    JAN 29

    This Week in AI Security - 29th January 2026

    In this final episode of January 2026, Jeremy breaks down a high-stakes week in AI security, featuring critical framework flaws, cloud-native exploits, and a major security warning regarding a popular autonomous AI agent. Key Stories & Developments: Chainlit Framework Flaws: Two critical CVEs were identified in Chainlit, a popular Python package for building enterprise chatbots. These vulnerabilities, including Arbitrary File Read and Server-Side Request Forgery (SSRF), highlight the supply chain risks inherent in the rapidly growing AI development ecosystem.Google Gemini Workspace Exploit: Researchers demonstrated how Gemini can be manipulated via malicious calendar invites. By embedding hidden instructions (similar to Ascii or emoji smuggling), attackers can trick the AI into exfiltrating sensitive user data, such as meeting details and attachments.VS Code "Spyware" Plugins: Over 1.5 million developers were potentially exposed to malicious VS Code extensions impersonating ChatGPT. These plugins serve as "watering hole" attacks designed to harvest sensitive environment variables, credentials, and deployment keys.Vertex AI Privilege Escalation: A novel attack chain in Google’s Vertex AI was disclosed. Attackers used a malicious reverse shell in a reasoning engine function to escalate privileges via the Instance Metadata Service, gaining master access to chat sessions, storage buckets, and logs.The "Cloudbot" Warning: A deep dive into Cloudbot (now rebranded as ClawdBot), a general-purpose AI agent. Researchers found hundreds of instances sitting wide open on the internet, many providing full root shell access and exposing personal conversation histories and API keys.Episode Links https://www.theregister.com/2026/01/20/ai_framework_flaws_enterprise_clouds/https://www.securityweek.com/weaponized-invite-enabled-calendar-data-theft-via-google-gemini/https://cybernews.com/security/fake-chatgpt-vscode-extensions-compromised-developers/https://gbhackers.com/google-vertex-ai-flaw/https://www.insurancejournal.com/magazines/mag-features/2026/01/26/855293.htmhttps://arxiv.org/pdf/2601.10338https://techcrunch.com/2026/01/27/everything-you-need-to-know-about-viral-personal-ai-assistant-clawdbot-now-moltbot/https://securityboulevard.com/2026/01/clawdbot-is-what-happens-when-ai-gets-root-access-a-security-experts-take-on-silicon-valleys-hottest-ai-agent/https://jpcaparas.medium.com/hundreds-of-clawdbot-instances-were-exposed-on-the-internet-heres-how-to-not-be-one-of-them-63fa813e6625https://www.bitdefender.com/en-us/blog/hotforsecurity/moltbot-security-alert-exposed-clawdbot-control-panels-risk-credential-leaks-and-account-takeovers Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo

    24 min
  6. Sydney Marrone of Nebulock

    JAN 28

    Sydney Marrone of Nebulock

    In this episode of Modern Cyber, Jeremy is joined by Sydney Marrone, a premier expert in the field of threat hunting and the Head of Threat Hunting at Nebulock. The conversation explores the rapidly evolving intersection of threat hunting and artificial intelligence, specifically focusing on how AI agents are transforming the speed and efficacy of defensive operations. Sydney shares her journey from "crawling under desks" in IT to building elite threat hunting teams at major organizations like Lumen (formerly CenturyLink) and Splunk. She breaks down her newly released Agentic Threat Hunting Framework (ATHF) and the LOCK pattern (Learn, Observe, Check, Keep), explaining how AI can condense a hunt that previously took four weeks into a mere 45 minutes. They also discuss the critical need for AI governance, the risks of "ungoverned access," and why "trust but verify" remains the golden rule when integrating LLMs into security workflows. About Sydney Marrone Sydney Marrone is the Head of Threat Hunting at Nebulock and a co-founder of the THOR Collective. With over a decade of experience in incident response, forensics, and blue teaming, she has become a leading voice in structured threat hunting. Sydney is the author of the Agentic Threat Hunting Framework (ATHF) and the co-author of the PEAK Threat Hunting Framework, which won a SANS award for its contribution to the community. A respected author and educator, Sydney co-authored The Threat Hunter's Cookbook and is currently developing a SANS course focused on threat hunting. Her work focuses on moving organizations from reactive to proactive security postures through advanced data science, automation, and authentic AI integration. Episode Links Nebulock (AI-Powered Threat Hunting): https://nebulock.io/ Agentic Threat Hunting Framework (ATHF): https://github.com/Nebulock-Inc/agentic-threat-hunting-framework THOR Collective (Substack & Community): https://dispatch.thorcollective.com/ PEAK Threat Hunting Framework: https://www.splunk.com/en_us/blog/security/peak-threat-hunting-framework.html HEARTH Repository (THOR Collective): https://github.com/THORCollective/HEARTH Threat Hunting MCP Server: https://github.com/THORCollective/threat-hunting-mcp-server

    39 min
  7. This Week in AI Security - 22nd January 2026

    JAN 23

    This Week in AI Security - 22nd January 2026

    In this episode of This Week in AI Security, Jeremy highlights a significant uptick in AI-related vulnerabilities and the shifting regulatory landscape. The episode covers everything from "Body Snatcher" flaws in enterprise platforms to the growing "industrialization" of AI-powered exploit generation. Key Stories & Developments: California's Cease and Desist to XAI: Following international concerns over sexualized deepfakes, California has issued a first-of-its-kind cease and desist order to XAI. This marks a major moment in regional AI oversight in the absence of federal legislation.ServiceNow "Body Snatcher" Flaw: A critical 9.3/10 CVE was identified in ServiceNow’s AI agent service. An unauthenticated endpoint allowed for Remote Code Execution (RCE), demonstrating that unauthenticated APIs remain a massive risk for agentic systems.Anthropic "Magic String" Crash: Researchers discovered a specific "magic string" that can effectively crash Anthropic LLM sessions. This specialized prompt acts as a denial-of-service against agentic workflows by killing the active interaction stream.Claude Code Data Leak: A default logging feature in Claude Code (vibe coding) saves full-text chat histories in a local directory. Developers committing this directory to public repos risk exposing their entire application logic and internal prompts to attackers.Eurostar Chatbot Exploit: A public-facing AI chatbot for Eurostar was found vulnerable to guardrail bypass and prompt injection. Ross Donald discovered that simply hardcoding a "validation" parameter in the API allowed him to bypass front-end checks.Industrialized Exploit Generation: A new study suggests that for a mere $30 token budget, an LLM can successfully generate an exploit for a known software vulnerability, potentially reducing the "time-to-exploit" to under 20 minutes. Episode Links https://thehackernews.com/2026/01/servicenow-patches-critical-ai-platform.htmlhttps://appomni.com/ao-labs/bodysnatcher-agentic-ai-security-vulnerability-in-servicenow/https://cy.md/opencode-rce/https://techcrunch.com/2026/01/16/california-ag-sends-musks-xai-a-cease-and-desist-order-over-sexual-deepfakes/https://mastodon.social/@Viss/115923109466960526https://sean.heelan.io/2026/01/18/on-the-coming-industrialisation-of-exploit-generation-with-llms/https://bsky.app/profile/aparker.io/post/3mcqehqhcgc2q Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo

    16 min
  8. This Week in AI Security - 15th January 2026

    JAN 15

    This Week in AI Security - 15th January 2026

    Happy New Year! Jeremy kicks off 2026 with a special extended episode to catch up on everything that happened while the industry was on holiday. From humanoid robots to new global protocols for "Agentic Commerce," AI adoption is accelerating at an unprecedented pace. Market & Strategic Trends: Explosive Growth: AI consumption has tripled over the last year, with user prompt volume growing 6x.Specialized Foundations: We are seeing a shift from general-purpose models to domain-specific LLMs, such as Nvidia's Alpamayo for autonomous vehicles.Agentic Commerce: Google has announced a new protocol designed to facilitate interactions between AI shopping agents and retail systems.Regulatory Landscape: New York has introduced the RAISE Act for AI security, while Italy is challenging Meta's "walled garden" approach to AI chatbots on WhatsApp.Critical Vulnerabilities & Research: Prompt Injection is "Inherent": OpenAI researchers suggest that agentic browsers may be inherently vulnerable to indirect prompt injection due to their need to process external instructions.Supply Chain Risks: Major vulnerabilities were identified in LangChain (API serialization issues) and n8n (max severity RCE), both core tools for building AI workflows.Shadow AI Attacks: Over 91,000 attack sessions were detected targeting AI deployments, including Server-Side Request Forgery (SSRF) campaigns launched via Llama.Episode Links https://securityboulevard.com/2026/01/report-increase-usage-of-generative-ai-services-creates-cybersecurity-challenge/ https://techcrunch.com/2026/01/05/boston-dynamicss-next-gen-humanoid-robot-will-have-google-deepmind-dna/ https://techcrunch.com/2026/01/05/nvidia-launches-alpamayo-open-ai-models-that-allow-autonomous-vehicles-to-think-like-a-human/ https://techcrunch.com/2026/01/11/google-announces-a-new-protocol-to-facilitate-commerce-using-ai-agents/ https://techcrunch.com/2025/12/20/new-york-governor-kathy-hochul-signs-raise-act-to-regulate-ai-safety/ https://techcrunch.com/2025/12/24/italy-tells-meta-to-suspend-its-policy-that-bans-rival-ai-chatbots-from-whatsapp/https://github.com/asgeirtj/system_prompts_leaks/ https://techcrunch.com/2025/12/22/openai-says-ai-browsers-may-always-be-vulnerable-to-prompt-injection-attacks/ https://techcrunch.com/2026/01/04/french-and-malaysian-authorities-are-investigating-grok-for-generating-sexualized-deepfakes/ https://www.bleepingcomputer.com/news/security/max-severity-ni8mare-flaw-lets-hackers-hijack-n8n-servers/ https://aws.amazon.com/security/security-bulletins/rss/2026-001-aws/ https://securityboulevard.com/2026/01/google-gemini-ai-flaw-could-lead-to-gmail-compromise-phishing-2/ https://www.scworld.com/brief/severe-ask-gordon-ai-vulnerability-addressed-by-docker https://www.eweek.com/news/langchain-ai-vulnerability-exposes-apps-to-hack/ https://cybernews.com/security/dig-ai-new-cyber-weapon-abused-by-hackers/ https://cyberpress.org/hackers-actively-exploit-ai-deployments/

    21 min

About

Welcome to Modern Cyber with Jeremy Snyder, a cutting-edge podcast series where cybersecurity thought leaders come together to explore the evolving landscape of digital security. In each episode, Jeremy engages with top cybersecurity professionals, uncovering the latest trends, innovations, and challenges shaping the industry. Also the home of 'This Week in AI Security', a snappy weekly round up of interesting stories from across the AI threat landscape.