Privacy Please

A Problem Lounge Show

Welcome to "Privacy Please," a podcast for anyone who wants to know more about data privacy and security. Join your hosts Cam and Gabe as they talk to experts, academics, authors, and activists to break down complex privacy topics in a way that's easy to understand. In today's connected world, our personal information is constantly being collected, analyzed, and sometimes exploited. We believe everyone has a right to understand how their data is being used and what they can do to protect their privacy. Please subscribe and help us reach more people! This podcast is part of The Problem Lounge network — conversations about the problems shaping our world, from digital privacy to everyday life.

  1. 2D AGO

    S7, E270 - The 40-Minute Hack That Stole the Blueprint for AI | The Mercor Breach

    A normal data breach steals names and passwords. This one may have stolen the recipe for building the world’s most powerful AI models, and it happened through software most people will never notice until it breaks. We follow the Mercor breach from the first warning signs to the moment poisoned Python packages hit PyPI and spread in minutes across systems that were set to auto-update.

    We walk through what Mercor actually does in the AI economy, especially RLHF (Reinforcement Learning from Human Feedback), and why that behind-the-scenes work shapes how tools from OpenAI, Anthropic, Meta, and Google behave. Then we unpack LiteLLM, the open-source “plumbing” that connects apps to multiple AI services, and how a supply chain attack can bypass the company you’re targeting by compromising the dependencies everyone trusts.

    From there, the focus shifts to the fallout: contractors whose Social Security numbers and identity documents may be exposed, companies scrambling to assess backdoors and credential theft, and the bigger fear that proprietary AI training data sets and labeling strategies are being auctioned on the dark web. We also dig into the compliance controversy around SOC 2 and ISO 27001-style certifications and what happens when security audits become performance instead of protection.

    If you care about cybersecurity, data privacy, AI governance, and open source risk, listen through to the end for concrete steps you can take right now. Subscribe, share this with a friend who uses AI tools, and leave a review with your take on who should be held accountable.

    13 min
  2. FEB 12

    S7, E265 - Don’t Trust, Verify: Even Your Update Button Might Be Lying

    Autonomy sounds like progress until the system turns your choices against you. We dive into how AI agents change the risk equation, why “don’t trust, verify” now beats “trust but verify,” and what to do when the update button itself becomes the attack vector.

    We start with the Ivy League leak tied to Harvard and UPenn, where attackers exposed admissions hold notes that map influence rather than credit cards. That context turns routine records into leverage for extortion, social pressure, and geopolitical targeting. From there, we trace the surge of agentic AI in the workplace as employees paste code, legal docs, and sensitive files into chat interfaces. The real accelerant is MCP, the Model Context Protocol that standardizes connections across Google Drive, Slack, databases, and more. Like USB for AI, MCP makes integration simple and powerful, but a single prompt injection can pivot across everything the agent can reach.

    Security gets messier with supply chain compromise. A China-nexus campaign allegedly hijacked the Notepad++ update mechanism, handing a bespoke backdoor to developers who did the right thing. We unpack how to keep patching while reducing risk: signed updates, independent checksum checks, tight egress policies for updaters, and strong monitoring around update flows.

    On the policy front, Rhode Island’s vendor transparency rule forces companies to name who buys data. It is a nutrition label for privacy, and it lets users and watchdogs finally connect the dots between friendly interfaces and aggressive brokers.

    We close with concrete defenses that raise the floor. Move high-value accounts to FIDO2 hardware keys or platform passkeys to block phishing at the protocol level. Scope agent permissions narrowly, isolate MCP connectors by function, and require explicit approvals for sensitive actions. Log everything an agent touches and review those trails. Autonomy should be earned, minimal, and observable. If AI is going to act on your behalf, it must prove itself at every step.

    If this conversation helps you think differently about agents, influence mapping, and how to lock down your stack, subscribe, share with a teammate, and leave a quick review telling us the one control you plan to implement this week.

    26 min
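Episode 270 turns on a simple mechanism: systems set to auto-update pulled poisoned PyPI packages within minutes. One mitigation the episode points toward is pinning dependency versions and failing closed on drift instead of auto-updating. A minimal sketch of such a drift check, using Python's standard `importlib.metadata`; the package names and versions in the pin list are purely illustrative, not taken from the episode:

```python
from importlib import metadata

# Hypothetical pin list: names and versions are illustrative only.
PINNED = {"requests": "2.31.0", "litellm": "1.0.0"}

def version_drift(pins):
    """Return packages whose installed version differs from the pin.

    Packages that are not installed at all are reported with None,
    so a compromised or silently removed dependency also surfaces.
    """
    drift = {}
    for name, wanted in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            installed = None
        if installed != wanted:
            drift[name] = installed
    return drift
```

Run in CI or before deploy, a non-empty result would block the rollout, which trades the convenience of auto-update for an explicit, reviewable upgrade step.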
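Episode 265's hardening list includes "independent checksum checks" for update files, i.e. comparing a downloaded installer against a digest published through a different channel than the download itself. A minimal sketch of that check with Python's standard `hashlib` and `hmac`; the function names are mine, and the assumption that the vendor publishes a SHA-256 digest out of band is illustrative:

```python
import hashlib
import hmac

def sha256_of(path, chunk_size=65536):
    """Stream a file through SHA-256 so large installers never load fully into RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def checksum_ok(path, expected_hex):
    """Compare the file's digest to the vendor-published hex digest in constant time."""
    return hmac.compare_digest(sha256_of(path), expected_hex.lower())
```

The check only helps if `expected_hex` arrives via a channel the update mechanism itself cannot tamper with (a signed release page, a mirror, an out-of-band advisory), which is exactly the point the episode makes about hijacked updaters.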
4.7 out of 5 · 30 Ratings

