Modern Cyber with Jeremy Snyder

Jeremy Snyder

Welcome to Modern Cyber with Jeremy Snyder, a cutting-edge podcast series where cybersecurity thought leaders come together to explore the evolving landscape of digital security. In each episode, Jeremy engages with top cybersecurity professionals, uncovering the latest trends, innovations, and challenges shaping the industry. Also the home of 'This Week in AI Security', a snappy weekly round up of interesting stories from across the AI threat landscape.

  1. Ben Wilcox of ProArch

    4D AGO

    Ben Wilcox of ProArch

    In this episode of Modern Cyber, Jeremy is joined by Ben Wilcox, the unique combination of CTO and CISO at ProArch, to discuss navigating the critical intersection of speed, risk, and security in the era of AI. Ben shares his perspective as a long-time practitioner in the Microsoft ecosystem, emphasizing that the security stack must evolve with each major technology shift—from on-prem to cloud to AI. The conversation focuses on how to help customers achieve "data readiness" for AI adoption, particularly stressing that organizational discipline (like good compliance) is the fastest path to realizing AI's ROI. Ben reveals that the biggest concern he hears from enterprise customers is not LLM hallucinations or bias, but the risk of a major data breach via new AI services. He explains how ProArch leverages the comprehensive Microsoft security platform to provide centralized security and identity control across data, devices, and AI agents, ensuring that user access and data governance (Purview) trickle down through the entire stack. Finally, Ben discusses the inherent friction of his dual CISO/CTO role, explaining his philosophy of balancing rapid feature deployment with risk management by defining a secure "MVP" baseline and incrementally layering on controls as product maturity and risk increase. About Ben Wilcox Ben Wilcox is the Chief Technology Officer and Chief Information Security Officer at ProArch, where he leads global strategy for cloud modernization, cybersecurity, and AI enablement. With over two decades of experience architecting secure digital transformations, Ben helps enterprises innovate responsibly while maintaining compliance and resilience. He’s recently guided Fortune 500 clients through AI adoption and zero-trust initiatives, ensuring that security evolves in step with rapid technological change. Episode Links https://www.proarch.com/ https://www.linkedin.com/in/ben-wilcox/ https://ignite.microsoft.com/en-US/home

    39 min
  2. This Week in AI Security - 13th November 2025

    5D AGO

    This Week in AI Security - 13th November 2025

    In this week's episode, Jeremy covers seven significant stories and academic findings that reveal the escalating risks and new attack methods targeting Large Language Models (LLMs) and the broader AI ecosystem. Key stories include: Prompt Flux Malware: Google Threat Intelligence Group (GTAG) discovered a new malware family called Prompt Flux that uses the Google Gemini API to continuously rewrite and modify its own behavior to evade detection—a major evolution in malware capabilities.ChatGPT Leak: User interactions and conversations with ChatGPT have been observed leaking into Google Analytics and the Google Search Console on third-party websites, potentially exposing the context of user queries.Traffic Analysis Leaks: New research demonstrates that observers can deduce the topics of a conversation in an LLM chatbot with high accuracy simply by analyzing the size and frequency of encrypted network packets (token volume), even without decrypting the data.Secret Sprawl: An analysis by Wiz found that several of the world's largest AI companies are leaking secrets and credentials in their public GitHub repositories, underscoring that the speed of AI development is leading to basic, repeatable security mistakes.Non-Deterministic LLMs: Research from Anthropic highlights that LLMs are non-deterministic and highly unreliable in describing their own internal reasoning processes, giving inconsistent responses even to minor prompt variations.The New AI VSS: The OWASp Foundation unveiled the AI Vulnerability Scoring System (AI VSS), a new framework to consistently classify and quantify the severity (on a 0-10 scale) of risks like prompt injection in LLMs, helping organizations make better risk-informed decisions.Episode Links: https://cybersecuritynews.com/promptflux-malware-using-gemini-api/ https://thehackernews.com/2025/11/microsoft-uncovers-whisper-leak-attack.html https://arstechnica.com/ai/2025/11/llms-show-a-highly-unreliable-capacity-to-describe-their-own-internal-processes/ https://futurism.com/artificial-intelligence/llm-robot-vacuum-existential-crisis https://www.scworld.com/resource/owasp-global-appsec-new-ai-vulnerability-scoring-system-unveiled https://arstechnica.com/tech-policy/2025/11/oddest-chatgpt-leaks-yet-cringey-chat-logs-found-in-google-analytics-tool/ https://www.securityweek.com/many-forbes-ai-50-companies-leak-secrets-on-github/

    16 min
  3. This Week in AI Security - 6th November 2025

    NOV 6

    This Week in AI Security - 6th November 2025

    In this week's episode, Jeremy looks at three compelling stories and a significant academic paper that illustrate the accelerating convergence of AI, APIs, and network security. API Exposure in AI Services: We discuss a path traversal vulnerability that led to the discovery of 3,000 API keys in a managed AI hosting service, underscoring that the API remains the exposed attack surface where data exfiltration occurs. AI Code Agent Traffic Analysis: Drawing on research from Chaser Systems, Jeremy breaks down the network traffic from popular AI coding agents (like Copilot and Cursor). The analysis reveals that sensitive data, including previous conversation context and PII, is repeatedly packaged and resent with every subsequent request, making detection and leakage risk significantly higher. LLM-Powered Malware: We cover a groundbreaking discovery by the Microsoft Incident Response Team (DART): malware using the OpenAI Assistants API as its Command and Control (C2) server. This new category of malware replaces traditional hard-coded instructions with an LLM-driven "brain," giving it the potential to coordinate malicious activity with context, creativity, and adaptability. The Guardrail Fallacy: Finally, Jeremy discusses an academic paper showing that strong, adaptive attacks can bypass LLM defenses against Jailbreaks and Prompt Injections with an Attack Success Rate (ASR) of over 90%. The research argues that simple guardrails provide organizations with a dangerous false sense of security. Episode Links https://chasersystems.com/blog/what-data-do-coding-agents-send-and-where-to/ https://embracethered.com/blog/posts/2025/claude-abusing-network-access-and-anthropic-api-for-data-exfiltration/ https://arxiv.org/pdf/2510.09023 https://www.microsoft.com/en-us/security/blog/2025/11/03/sesameop-novel-backdoor-uses-openai-assistants-api-for-command-and-control/ ------ Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo

    16 min
  4. This Week in AI Security - 30th October 2025

    OCT 30

    This Week in AI Security - 30th October 2025

    In this week's episode, Jeremy focuses on two rapidly evolving areas of AI security: the APIs that empower AI services and the risks emerging from new AI Browsers. We analyze two stories highlighting the exposure of secrets and sensitive data: API Insecurity: A path traversal vulnerability was discovered in the APIs powering an MCP server hosting service, leading to the exposure of 3,000 API keys. This reinforces the lesson that foundational security mistakes, such as inadequate secret management and unpatched vulnerabilities, are being repeated in the rush to launch new AI services. CVE in Google Cloud Vertex AI: We discuss a confirmed CVE in Google's Vertex AI service APIs. This vulnerability briefly allowed requests made by one customer's application to be routed and responded to another customer's account, risking exposure of sensitive corporate data and intellectual property in a multi-tenant SaaS environment. Finally, we explore the risks of AI Browsers (like the ChatGPT Atlas or Perplexity Comet browser) and AI Sidebars. These agents, designed to act with agency on a user's behalf (e.g., price comparison), are vulnerable to techniques that can reveal sensitive PII and user credentials to malicious websites, or unwittingly download malware. Episode Links https://blog.gitguardian.com/breaking-mcp-server-hosting/https://cloud.google.com/support/bulletins#gcp-2025-059 https://fortune.com/2025/10/23/cybersecurity-vulnerabilities-openai-chatgpt-atlas-ai-browser-leak-user-data-malware-prompt-injection/ https://securityboulevard.com/2025/10/news-alert-squarex-reveals-new-browser-threat-ai-sidebars-cloned-to-exploit-user-trust/ https://techcrunch.com/2025/10/25/the-glaring-security-risks-with-ai-browser-agents/ ____________ Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of FireTail's AI Security & Governance Platform

    11 min
  5. This Week in AI Security - 23rd October 2025

    OCT 23

    This Week in AI Security - 23rd October 2025

    In this week's episode, recorded live from the inaugural AI Security Summit hosted by Snyk, Jeremy reports on the latest threats and strategic discussions shaping the industry. Covering multiple instances of "old risks" reappearing in new AI contexts... The Salesforce "forced leak" vulnerability, where an AI agent was exposed to malicious prompt injection via seemingly innocuous text fields on web forms (a failure of input sanitization). Research from Nvidia detailing waterhole attacks where malicious code (e.g., PowerShell) is hidden in decoy libraries (like "react-debug") that AI coding assistants might suggest to developers. A consumer AI girlfriend app that exposed customer chat data by storing conversations in an open Apache Kafka pipeline, demonstrating a basic failure of security hygiene under the pressure of rapid AI development. The "Glass Worm" campaign, where invisible Unicode control characters (similar to Ascii Smuggling research by Firetail) were used to embed malware in a VS Code plugin, proving the invisible code risk is actively being leveraged in development tools. Finally, Jeremy shares strategic insights from the summit, including the massive projected growth of the AI market (approaching the size of cloud computing), the urgency of data readiness and governance to prevent model poisoning, and the futurist perspective that AI's accelerated skill acquisition (potentially surpassing humans in certain tasks in an 18-month cycle) will require human workers to constantly upskill and change roles more frequently. Episode Links https://noma.security/blog/forcedleak-agent-risks-exposed-in-salesforce-agentforce/ https://www.koi.ai/blog/glassworm-first-self-propagating-worm-using-invisible-code-hits-openvsx-marketplace https://developer.nvidia.com/blog/from-assistant-to-adversary-exploiting-agentic-ai-developer-tools/ https://www.foxnews.com/tech/ai-girlfriend-apps-leak-millions-private-chats https://layerxsecurity.com/blog/cometjacking-how-one-click-can-turn-perplexitys-comet-ai-browser-against-you/ Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform

    19 min
  6. Chris Farris of fwd:cloudsec

    OCT 21

    Chris Farris of fwd:cloudsec

    In this special in-person episode of Modern Cyber, recorded at fwd:cloudsec Europe, Jeremy is joined by cloud security expert and conference organizer Chris Farris. Drawing on his over 30 years in IT, Chris recounts his journey into cloud security, from his early days with Linux to moving video archives to AWS S3. The conversation revisits the foundational mindset shifts that occurred with the rise of the cloud, focusing on the agility it brought and the security gaps it created, such as the transition from rigid, on-premises governance to the chaotic freedom of API calls and ClickOps. The core of the episode explores the concept of the Sovereign Cloud, specifically Amazon's intended European Sovereign Cloud. Chris clarifies that simple data residency is not true sovereignty due to the US Cloud Act. He details the unique nature of the European partition—a completely separate partition, billing system, and support staff operated only by EU citizens—and identifies the primary flaw: the lack of a legal statute protecting the European employees from being compelled to act under the Cloud Act. Finally, Chris shares a powerful reflection on the fwd:cloudsec community, calling it a "second cloud family". Guest Bio Chris Farris is a highly experienced IT professional with a career spanning over 25 years. During this time, he has focused on various areas, including Linux, networking, and security. For the past eight years, he has been deeply involved in public-cloud and public-cloud security in media and entertainment, leveraging his expertise to build and evolve multiple cloud security programs. Chris is passionate about enabling the broader security team’s objectives of secure design, incident response, and vulnerability management. He has developed cloud security standards and baselines to provide risk-based guidance to development and operations teams. As a practitioner, he has architected and implemented numerous serverless and traditional cloud applications, focusing on deployment, security, operations, and financial modeling. He is one of the organizers of the fwd:cloudsec conference and presented at various AWS conferences and BSides events. He was named one of the inaugural AWS Security Heroes. Chris shares his insights on security and technology on social media platforms like BlueSky, Mastodon and his website chrisfarris.com. Episode Links‍ https://fwdcloudsec.org https://fwdcloudsec.org/forum/ https://www.chrisfarris.com Discover all of your Shadow AI now Worried about AI security? Get Complete AI Visibility in 15 Minutes. Book a demo of Firetail's AI Security & Governance Platform.

    51 min
  7. This Week in AI Security - 16 October 2025

    OCT 21

    This Week in AI Security - 16 October 2025

    In this week's episode of This Week in AI Security, Jeremy covers four key developments shaping the AI security landscape. Jeremy begins by analyzing a GitHub Copilot flaw that exposed an LLM vulnerability similar to the one Jeremy disclosed last week. Researchers were able to use a hidden code comment feature to smuggle malicious prompts into the LLM, allowing them to potentially exfiltrate secrets and source code from private repositories. This highlights a growing risk in how LLMs process different input formats. Next, we discuss a fascinating research paper demonstrating the effectiveness of data poisoning. The study found that corrupting a model's behavior was possible with as few as 250 malicious documents—even in models with large training sets. By embedding a malicious command that mimicked sudo, researchers could implement a backdoor that sends data out, proving that the Attack Success Rate (ASR) is a critical metric for this real-world threat. We then examine a story at the intersection of agentic AI and supply chain risk, where untrusted actors exploited vulnerabilities in AI development plugins. By intercepting system prompts that lacked proper encryption, an attacker could discover the agent's permissions and potentially exfiltrate sensitive data, including Windows NTLM credentials. Finally, we look at the latest State of AI report, which provides further confirmation that LLMs like Claude are being used by malicious actors—specifically suspected North Korean state actors—to "vibe hack" the hiring process. By using AI to create perfect-looking resumes and tailored interview responses, the traditional method of spotting phony candidates by poor text quality is no longer reliable. Episode Links: https://www.securityweek.com/github-copilot-chat-flaw-leaked-data-from-private-repositories/https://www.anthropic.com/research/small-samples-poisonhttps://versprite.com/blog/watch-who-you-open-your-door-to-in-ai-times/https://excitech.substack.com/p/16-highlights-from-the-state-of-aihttps://www.stateof.ai/https://www.firetail.ai/blog/we-interviewed-north-korean-hacker-heres-what-learnedDiscover all of your Shadow AI now... Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform.

    9 min
  8. This Week in AI Security - 9th Oct 2025

    OCT 9

    This Week in AI Security - 9th Oct 2025

    In this very first episode of 'This Week in AI Security', brought to you by the Firetail team, Jeremy dives into three crucial stories from the past week that highlight the rapidly evolving security landscape of AI adoption. We start with a classic error: a contractor for the Australian State of New South Wales repeated the "open S3 bucket" mistake by uploading a sensitive data set to a generative AI platform, confirming that old security missteps are resurfacing with new technology. Next, we look at a win for the defense: how Microsoft's AI analysis tools blocked a sophisticated phishing campaign that used AI-generated malicious code embedded in an SVG file and was sent from a compromised small business—a clear proof that AI can be very useful on the defensive side. Finally, we discuss recent research from the Firetail team uncovering an ASCII Smuggling vulnerability in Google Gemini, Grok, and other LLMs. This technique uses hidden characters to smuggle malicious instructions into benign-looking prompts (e.g., in emails or calendar invites). We detail the surprising dismissal of this finding by Google, which highlights the urgent need to address common, yet serious, social engineering risks in the new age of LLMs. Show links: https://databreaches.net/2025/10/06/nsw-gov-contractor-uploaded-excel-spreadsheet-of-flood-victims-data-to-chatgpt/ https://www.infosecurity-magazine.com/news/ai-generated-code-phishing/ https://www.firetail.ai/blog/ghosts-in-the-machine-ascii-smuggling-across-various-llms https://thehackernews.com/2025/09/researchers-disclose-google-gemini-ai.html ________ Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo

    8 min

About

Welcome to Modern Cyber with Jeremy Snyder, a cutting-edge podcast series where cybersecurity thought leaders come together to explore the evolving landscape of digital security. In each episode, Jeremy engages with top cybersecurity professionals, uncovering the latest trends, innovations, and challenges shaping the industry. Also the home of 'This Week in AI Security', a snappy weekly round up of interesting stories from across the AI threat landscape.