Privacy Please

A Problem Lounge Show

Welcome to "Privacy Please," a podcast for anyone who wants to know more about data privacy and security. Join your hosts Cam and Gabe as they talk to experts, academics, authors, and activists to break down complex privacy topics in a way that's easy to understand. In today's connected world, our personal information is constantly being collected, analyzed, and sometimes exploited. We believe everyone has a right to understand how their data is being used and what they can do to protect their privacy.Please subscribe and help us reach more people! This podcast is part of The Problem Lounge network — conversations about the problems shaping our world, from digital privacy to everyday life.

  1. 8H AGO

    S7, E271 - One File to Rule Them All

    In this episode of Privacy Please, Cameron Ivey investigates Palantir Technologies, a data analytics company founded in 2003 with CIA backing that has quietly become embedded across nearly every major arm of the U.S. federal government. This week's investigation covers:

    The USDA Deal. On April 22nd, the Department of Agriculture signed a $300 million blanket purchase agreement with Palantir to build "One Farmer, One File," a unified digital profile for every American farmer. The deal was awarded without competitive bidding.

    The IRS Bombshell. The same week, The Intercept revealed, based on documents obtained by watchdog group American Oversight, that Palantir has been running financial crime surveillance operations inside the IRS since 2018. The IRS has paid Palantir over $130 million for access to a platform that cross-references bank records, tax filings, transaction histories, and more across millions of Americans.

    The Immigration Enforcement Machine. Palantir's ICE contracts, now over $145 million, power the agency's case management, deportation targeting, and real-time location tracking of immigrants. A tool called ELITE creates individual dossiers on deportation targets by pulling data from the Department of Health and Human Services.

    The Pushback That's Working. New York City's public hospital network canceled its Palantir contract after community organizing and City Council pressure. In the UK, 229,000 people have signed petitions to remove Palantir from the National Health Service. Public pressure is moving the needle.

    Five Things You Can Do Right Now. Cameron closes with specific, actionable steps every listener can take, from requesting your IRS transcript to freezing your credit to contacting your representative about sole-source contracting.

    Privacy Please is part of the Problem Lounge Network. New episodes weekly.
    theproblemlounge.com

    Chapter Markers
    00:00 - Cold Open
    01:30 - Intro & Show Welcome
    02:45 - Act One: The USDA Deal
    06:00 - Act Two: Who Is Palantir?
    11:30 - Act Three: The Empire Expands (ICE, Policing)
    17:00 - Act Four: Your Tax Returns Are In There Too
    24:00 - Act Five: The Layer Nobody's Talking About
    30:00 - Act Six: The Part That Gives Me Hope
    34:30 - What You Can Actually Do (5 Tips)
    39:00 - Closing Reflection

    22 min
  2. APR 20

    S7, E270 - The 40-Minute Hack That Stole the Blueprint for AI | The Mercor Breach

    A normal data breach steals names and passwords. This one may have stolen the recipe for building the world's most powerful AI models, and it happened through software most people will never notice until it breaks. We follow the Mercor breach from the first warning signs to the moment poisoned Python packages hit PyPI and spread in minutes across systems that were set to auto-update.

    We walk through what Mercor actually does in the AI economy, especially RLHF (Reinforcement Learning from Human Feedback), and why that behind-the-scenes work shapes how tools from OpenAI, Anthropic, Meta, and Google behave. Then we unpack LiteLLM, the open source "plumbing" that connects apps to multiple AI services, and how a supply chain attack can bypass the company you're targeting by compromising the dependencies everyone trusts.

    From there, the focus shifts to the fallout: contractors whose Social Security numbers and identity documents may be exposed, companies scrambling to assess backdoors and credential theft, and the bigger fear that proprietary AI training data sets and labeling strategies are being auctioned on the dark web. We also dig into the compliance controversy around SOC 2 and ISO 27001 style certifications and what happens when security audits become performance instead of protection.

    If you care about cybersecurity, data privacy, AI governance, and open source risk, listen through to the end for concrete steps you can take right now. Subscribe, share this with a friend who uses AI tools, and leave a review with your take on who should be held accountable.
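    The auto-update failure mode the episode describes, where poisoned packages propagate because nothing checks what actually arrived, can be blunted by pinning a known-good digest and refusing anything that doesn't match. A minimal Python sketch of that check (the package bytes and the pinned digest here are purely illustrative):

```python
import hashlib
import hmac

def sha256_hex(data: bytes) -> str:
    """Return the hex SHA-256 digest of raw bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, pinned_hex: str) -> bool:
    """Reject any artifact whose digest does not match the pinned value.
    hmac.compare_digest performs a timing-safe string comparison."""
    return hmac.compare_digest(sha256_hex(data), pinned_hex.lower())

# Simulate a digest pinned at review time vs. a later (possibly tampered) download.
original = b"trusted package contents"
pinned = sha256_hex(original)

print(verify_artifact(original, pinned))              # True: digest matches
print(verify_artifact(b"poisoned contents", pinned))  # False: refuse to install
```

    In practice you rarely write this by hand: pip's hash-checking mode (`--require-hashes` with `--hash=sha256:...` entries in a requirements file) applies the same idea to every dependency, so an auto-update that ships different bytes simply fails to install.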

    13 min
  3. FEB 12

    S7, E265 - Don’t Trust, Verify: Even Your Update Button Might Be Lying

    Autonomy sounds like progress until the system turns your choices against you. We dive into how AI agents change the risk equation, why "don't trust, verify" now beats "trust but verify," and what to do when the update button itself becomes the attack vector.

    We start with the Ivy League leak tied to Harvard and UPenn, where attackers exposed admissions hold notes that map influence rather than credit cards. That context turns routine records into leverage for extortion, social pressure, and geopolitical targeting. From there, we trace the surge of agentic AI in the workplace as employees paste code, legal docs, and sensitive files into chat interfaces. The real accelerant is MCP, the Model Context Protocol that standardizes connections across Google Drive, Slack, databases, and more. Like USB for AI, MCP makes integration simple and powerful, but a single prompt injection can pivot across everything the agent can reach.

    Security gets messier with supply chain compromise. A China-nexus campaign allegedly hijacked the Notepad++ update mechanism, handing a bespoke backdoor to developers who did the right thing. We unpack how to keep patching while reducing risk: signed updates, independent checksum checks, tight egress policies for updaters, and strong monitoring around update flows.

    On the policy front, Rhode Island's vendor transparency rule forces companies to name who buys data. It is a nutrition label for privacy, and it lets users and watchdogs finally connect the dots between friendly interfaces and aggressive brokers.

    We close with concrete defenses that raise the floor. Move high-value accounts to FIDO2 hardware keys or platform passkeys to block phishing at the protocol level. Scope agent permissions narrowly, isolate MCP connectors by function, and require explicit approvals for sensitive actions. Log everything an agent touches and review those trails. Autonomy should be earned, minimal, and observable. If AI is going to act on your behalf, it must prove itself at every step.

    If this conversation helps you think differently about agents, influence mapping, and how to lock down your stack, subscribe, share with a teammate, and leave a quick review telling us the one control you plan to implement this week.
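    The closing guidance, scope agent permissions narrowly, require explicit approval for sensitive actions, and log everything the agent touches, can be sketched as a simple policy gate in front of every tool call. Everything below (the connector names, the POLICY table, the gate helper) is a hypothetical illustration of the pattern, not any real MCP API:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent-audit")

# Hypothetical per-connector policy: a narrow allowlist of routine actions,
# plus actions that are permitted only with an explicit human approval.
POLICY = {
    "drive": {"allowed": {"read"}, "needs_approval": {"share"}},
    "slack": {"allowed": {"read", "post"}, "needs_approval": set()},
}

def gate(connector: str, action: str, approved: bool = False) -> bool:
    """Allow an agent action only if it is within the connector's scope.
    Sensitive actions additionally require explicit approval, and every
    decision is logged so the trail can be reviewed later."""
    rules = POLICY.get(connector)
    if rules is None:
        log.info("DENY %s.%s (unknown connector)", connector, action)
        return False
    if action in rules["allowed"]:
        log.info("ALLOW %s.%s", connector, action)
        return True
    if action in rules["needs_approval"] and approved:
        log.info("ALLOW %s.%s (explicit approval)", connector, action)
        return True
    log.info("DENY %s.%s", connector, action)
    return False

gate("drive", "read")                   # allowed: in scope
gate("drive", "share")                  # denied: sensitive, no approval
gate("drive", "share", approved=True)   # allowed: explicit approval
gate("notion", "read")                  # denied: connector not isolated/registered
```

    The design choice the episode argues for is visible in the defaults: anything not explicitly scoped is denied, and the audit log captures denials as well as grants, so autonomy stays minimal and observable.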

    26 min
4.7 out of 5 (30 Ratings)

