Katharine Jarmul, privacy in ML/AI expert and author of Practical Data Privacy, joins Hugo to unpack why most AI privacy advice is theater, and what technical privacy actually looks like when you're shipping LLMs, agents, and multimodal systems into the real world. In this episode, we dig into how to build defensible systems in an era of AI agents and multimodal models: why system prompts (and your entire agent harness!) should be considered public by default, and why "privacy observability" is as critical as data observability for anyone building with LLMs today. Multimodal is what changes the threat model: identifiers hide in images, audio, and metadata, not just text, and the old anonymization playbook doesn't cover them.

We Discuss:

* No Convenience Tax: you don't have to trade privacy for utility; high-utility AI products can be privacy-preserving through technical controls like privacy routing and input sanitization;
* Public Prompts and Harnesses: assume any instruction or secret in a system prompt or agent harness will be exfiltrated; don't put sensitive information there in the first place;
* Privacy Observability: tag and track data flows so information is used only for its original intended purpose, catching design flaws before they become legal problems;
* Technical Privacy: implement mathematical and statistical constraints directly in ML systems and data flows so privacy is measurable and enforceable, not aspirational;
* Tiered Guardrails: a three-layer approach of deterministic filters for hard rules, algorithmic models for nuanced classification, and internal alignment training for behavioral baselines;
* Federated Learning Is Not Privacy: model updates in FL leak sensitive data on their own; you must layer differential privacy or encrypted computation on top, or the updates can be reverse-engineered;
* Anonymization Spectrum:
navigate the "grayscale" of privacy in multimodal AI, balancing data utility against individual risk as identifiers hide in non-obvious places;
* Privacy Champions: embed privacy accountability directly into development by training and incentivizing engineers inside product teams;
* Red Teaming as Ritual: your goal is to attack yourself; practice thinking like an attacker, and turn privacy testing into an organization-wide creative ritual rather than a siloed security task.

You can find the full episode on Spotify, Apple Podcasts, and YouTube, or interact directly with the transcript in NotebookLM. If you do, let us know anything you find in the comments!

👉 Katharine is teaching her next cohort of Practical AI Privacy starting April 20. She's kindly giving readers of Vanishing Gradients 10% off: use this link. I'll be taking it, so I hope to see you there! 👈

Our flagship course Building AI Applications just wrapped its final cohort, but we're cooking up something new. If you want to be first to hear about it (and help shape what we build), drop your thoughts here.
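To make the "privacy observability" idea concrete, here is a minimal sketch of purpose tagging: data carries the purposes it was collected for, and any use that doesn't match is refused. All names (`TaggedRecord`, `use`, the purpose labels) are hypothetical illustrations, not anything from Katharine's stack.

```python
from dataclasses import dataclass, field

# Hypothetical purpose tags; a real system would draw these from a data catalog.
ALLOWED_PURPOSES = {"support_chat", "billing", "analytics"}

@dataclass
class TaggedRecord:
    """A piece of user data tagged with the purposes it was collected for."""
    value: str
    purposes: set = field(default_factory=set)

def use(record: TaggedRecord, purpose: str) -> str:
    """Release the value only if this use matches an original collection purpose."""
    if purpose not in ALLOWED_PURPOSES:
        raise ValueError(f"unknown purpose: {purpose}")
    if purpose not in record.purposes:
        # In production you would log this denial; that log IS the observability signal.
        raise PermissionError(f"record not collected for purpose: {purpose}")
    return record.value

email = TaggedRecord("user@example.com", purposes={"billing"})
use(email, "billing")      # allowed: matches the original purpose
# use(email, "analytics")  # raises PermissionError: purpose mismatch
```

The point is that purpose mismatches surface as explicit, auditable events at the point of use, rather than as surprises during a legal review.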
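The tiered-guardrails idea discussed above can be sketched in a few lines: a deterministic first tier for hard rules, a second algorithmic tier for nuanced cases, with model alignment as the baseline behind both. This is a toy illustration under assumed patterns and thresholds; the regexes are not production-grade PII detection, and `tier2_score` stands in for a real classifier such as Llama Guard or a PII NER model.

```python
import re

# Tier 1: deterministic filters for hard rules (fast, auditable, no false negatives
# for the exact patterns they encode). Patterns here are illustrative only.
HARD_RULES = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US-SSN-shaped numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email-shaped strings
]

def tier1_block(text: str) -> bool:
    return any(p.search(text) for p in HARD_RULES)

# Tier 2: an algorithmic classifier for nuanced cases. Stubbed with a keyword
# check here; in practice this would be a trained model.
def tier2_score(text: str) -> float:
    risky_terms = {"password", "diagnosis", "home address"}
    return 1.0 if any(t in text.lower() for t in risky_terms) else 0.0

def guardrail(text: str, threshold: float = 0.5) -> str:
    if tier1_block(text):
        return "block"    # hard rule hit: deterministic, no appeal
    if tier2_score(text) >= threshold:
        return "review"   # nuanced case: escalate rather than silently pass
    return "allow"        # tier 3 (alignment training) is the behavioral baseline

print(guardrail("my ssn is 123-45-6789"))  # block
```

The layering matters: cheap deterministic checks run first and catch the unambiguous cases, so the more expensive classifier only sees what survives them.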
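And to see why "federated learning is not privacy" on its own, here is a toy sketch of the kind of sanitization you have to layer on top: clip each client's model update to a bounded norm, then add Gaussian noise, in the spirit of DP-SGD-style training. The function name and parameters are invented for illustration, and the noise is not calibrated to a formal (epsilon, delta) guarantee.

```python
import math
import random

def dp_sanitize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a client's update to L2 norm <= clip_norm, then add Gaussian noise.

    A toy sketch of differential-privacy-style update sanitization for
    federated learning; without a step like this, raw updates can leak
    the client's training data.
    """
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [x * scale for x in update]          # bound any one client's influence
    sigma = noise_multiplier * clip_norm           # noise scaled to the clip bound
    return [x + rng.gauss(0.0, sigma) for x in clipped]

raw_update = [3.0, 4.0]   # L2 norm 5.0, so clipping scales it down to norm 1.0
safe_update = dp_sanitize_update(raw_update)
```

Clipping bounds what any single client can contribute; the noise is what makes the aggregate statistically deniable. Skip either step and the update remains reverse-engineerable.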
LINKS

* Practical AI Privacy course on Maven (10% off with code build-with-privacy)
* Katharine Jarmul on LinkedIn
* Probably Private (Katharine's website & newsletter)
* Practical Data Privacy (Katharine's book)
* Let's Build an AI Privacy Router (Lightning Lesson)
* Practical AI Privacy: Agents & Local LLMs (newsletter issue)
* A Deep Dive into Memorization in Deep Learning (kjamistan blog)
* Microsoft Presidio
* Llama Guard 3 8B on Hugging Face
* Nicholas Carlini
* From Magic to Malware: How OpenClaws Agent Skills Become an Attack Surface (1Password)
* Owning Ethics (Metcalf, Moss, boyd; Data & Society)
* Hugo on guardrails in LLM applications
* Upcoming Events on Luma
* Vanishing Gradients on YouTube
* Watch the podcast video on YouTube

How You Can Support Vanishing Gradients

Vanishing Gradients is a podcast, workshop series, blog, and newsletter focused on what you can build with AI right now: over 70 episodes with expert practitioners from Google DeepMind, Netflix, Stanford, and elsewhere, plus hundreds of hours of free, hands-on workshops. All independent, all free. If you want to help keep it going:

* Become a paid subscriber, from $8/month
* Share this with a builder who'd find it useful
* Subscribe to our YouTube channel

Thanks for reading Vanishing Gradients! This post is public, so feel free to share it. Get full access to Vanishing Gradients at hugobowne.substack.com/subscribe