Privacy Please

The Problem Lounge

Welcome to "Privacy Please," a podcast for anyone who wants to know more about data privacy and security. Join your hosts Cam and Gabe as they talk to experts, academics, authors, and activists to break down complex privacy topics in a way that's easy to understand. In today's connected world, our personal information is constantly being collected, analyzed, and sometimes exploited. We believe everyone has a right to understand how their data is being used and what they can do to protect their privacy. Please subscribe and help us reach more people! This podcast is part of The Problem Lounge network — conversations about the problems shaping our world, from digital privacy to everyday life.

  1. 1D AGO

    S7, E265 - Don’t Trust, Verify: Even Your Update Button Might Be Lying

    Autonomy sounds like progress until the system turns your choices against you. We dive into how AI agents change the risk equation, why "don't trust, verify" now beats "trust but verify," and what to do when the update button itself becomes the attack vector.

    We start with the Ivy League leak tied to Harvard and UPenn, where attackers exposed admissions hold notes that map influence rather than credit cards. That context turns routine records into leverage for extortion, social pressure, and geopolitical targeting. From there, we trace the surge of agentic AI in the workplace as employees paste code, legal docs, and sensitive files into chat interfaces. The real accelerant is MCP, the Model Context Protocol that standardizes connections across Google Drive, Slack, databases, and more. Like USB for AI, MCP makes integration simple and powerful, but a single prompt injection can pivot across everything the agent can reach.

    Security gets messier with supply chain compromise. A China-nexus campaign allegedly hijacked the Notepad++ update mechanism, handing a bespoke backdoor to developers who did the right thing. We unpack how to keep patching while reducing risk: signed updates, independent checksum checks, tight egress policies for updaters, and strong monitoring around update flows.

    On the policy front, Rhode Island's vendor transparency rule forces companies to name who buys data. It is a nutrition label for privacy, and it lets users and watchdogs finally connect the dots between friendly interfaces and aggressive brokers.

    We close with concrete defenses that raise the floor. Move high-value accounts to FIDO2 hardware keys or platform passkeys to block phishing at the protocol level. Scope agent permissions narrowly, isolate MCP connectors by function, and require explicit approvals for sensitive actions. Log everything an agent touches and review those trails. Autonomy should be earned, minimal, and observable. If AI is going to act on your behalf, it must prove itself at every step.

    If this conversation helps you think differently about agents, influence mapping, and how to lock down your stack, subscribe, share with a teammate, and leave a quick review telling us the one control you plan to implement this week.
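    The independent checksum check recommended above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's official tooling; the file path and published digest in the usage would come from the vendor's website, fetched over a channel separate from the updater itself.

    ```python
    import hashlib
    import hmac

    def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
        """Stream a file through SHA-256 and return its lowercase hex digest."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            # Read in chunks so large installers don't have to fit in memory.
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_update(installer_path: str, published_digest: str) -> bool:
        """Compare the installer's digest against the digest published out of band.

        hmac.compare_digest gives a constant-time comparison and avoids
        accidental substring or case mismatches.
        """
        return hmac.compare_digest(sha256_of(installer_path), published_digest.lower())
    ```

    The point of the exercise is that the hash you compare against must not come from the update mechanism you are trying to verify: if the updater is hijacked, it can serve a backdoored binary and a matching hash together, so fetch the published digest from the vendor's HTTPS page or a signed release note instead.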

    26 min
  2. 12/15/2025

    S6, E262 - WARNER BROS CRISIS: Class Action Lawsuit & The $108B Hostile Takeover (Dec 15 Update)

    It is Monday, December 15th, and the battle for Hollywood has officially gone nuclear. What started as an $82 billion acquisition by Netflix has morphed into a $108 billion hostile takeover battle with Paramount Skydance. As of this morning, stocks are volatile, the government has frozen the deal, and a massive Class Action Lawsuit has just been filed to burn it all down.

    In this Special Report from Privacy Please, we break down the chaos of the last 72 hours. We uncover the "National Security" weapon Netflix is using to kill the deal, the foreign money backing Paramount, and the leaked memos that reveal why executives are selling you out. No matter who wins, the Algorithm or the Oligarchs, your privacy is the casualty.

    Time Stamps / Key Moments:
    0:00 - Monday Morning Chaos: Stocks Halted & The $108B Counter-Bid
    2:15 - Future A vs. Future B: The Algorithm Era vs. The Oligarch Era
    5:30 - BREAKING: The "National Security" Argument & Class Action Lawsuit
    8:45 - Leaked Memos: The "Golden Parachute" Betrayal
    11:20 - The Fallout: Why Streaming Prices Will Hit $35/Month

    What you'll uncover in this deep dive:
    The Weekend of Chaos: A complete timeline of how Netflix lost control of the deal over the weekend.
    The "Foreign Money" Threat: Why Paramount's backing by sovereign wealth funds has regulators panicked.
    Netflix's Hypocrisy: How the surveillance giant is weaponizing "privacy" to stop their competitors.
    The Consumer Cost: Why the era of cheap streaming is officially dead.

    Join the Community: We are building a community dedicated to navigating these complex digital issues.
    Website & Newsletter: https://www.theproblemlounge.com
    Support the Show: http://buzzsprout.com/622234/support

    Don't forget to Like, Comment, and Subscribe! Your support helps us uncover the stories Big Tech wants to hide.

    #WarnerBros #Netflix #Paramount #StreamingWars #PrivacyPlease #Antitrust #FTC #DataPrivacy #Hollywood #BreakingNews #ClassAction #StockMarket

    8 min
  3. 11/17/2025

    S6, E260 - How Digital Therapy is Changing Mental Health (and Privacy) Forever

    A sleepless night, a soft prompt, and a flood of relief: the rise of AI therapy and companion apps is rewriting how we seek comfort when it matters most. We explore why these tools feel so human and so helpful, and what actually happens to the raw, intimate data shared in moments of vulnerability.

    From CBT-style exercises to memory-rich chat histories, the promise is powerful: instant support, lower cost, and zero visible judgment. The tradeoff is less visible but just as real: monetization models that thrive on sensitive inputs, "anonymized" data that can often be re-identified, and breach risks that turn private confessions into attack surfaces.

    We dig into the ethical edge: can a language model provide mental health care, or does it simulate empathy without the duty of care? We look at misinformation, hallucinated advice, and the way overreliance on AI can delay genuine human connection and professional help. The legal landscape lags behind the technology, with HIPAA often out of scope and accountability unclear when harm occurs.

    Still, there are practical ways to reduce exposure without forfeiting every benefit. We walk through privacy policies worth reading, data controls worth using, and signs that an app takes security seriously, from encryption to third-party audits. Most of all, we focus on agency. Use AI for structure, journaling, and small reframes; lean on people for crisis, nuance, and real relationship. Create boundaries for what you share, separate identities when possible, and revisit whether a tool is helping you act or just keeping you company.

    If you've ever confided in a bot at 2 a.m., this conversation gives you the context and steps to stay safer while still finding support. If it resonates, subscribe, share with a friend who might need it, and leave a review to help others find the show.

    18 min
  4. 10/23/2025

    S6, E258 - The Synthetic Star: The AI Influencer Earning More Than You

    She has millions of followers, lands six-figure brand deals, and lives a life of curated perfection. The only catch? She isn't real. She was entirely created by artificial intelligence. Welcome to the unsettling world of synthetic influencers. In this compelling episode of Privacy Please, we dive deep into the booming industry of AI-generated online personalities.

    Discover:
    - The Technology: How advanced AI image generators, 3D modeling, and Large Language Models combine to create hyper-realistic avatars and their compelling "personalities."
    - The Business Case: Why major brands and marketing agencies are investing millions in digital beings that offer total control, scalability, and no risk of scandal.
    - The Privacy & Ethical Dilemmas: We explore the "uncanny valley" of trust, the impact of deception by design, the new extremes of unrealistic beauty standards, and the potential for these AI personas to be used for sophisticated scams or propaganda.
    - The Future of Authenticity: What does the rise of the synthetic star mean for human creativity, genuine connection, and the very definition of "real" in our digital world?

    It's a future that's already here, shaping what we see, what we buy, and even what we believe.

    Key Topics Covered:
    - What are virtual/synthetic influencers?
    - Examples: Lil Miquela, Aitana Lopez, Shudu Gram
    - AI technologies used: image generation, 3D modeling, LLMs
    - Reasons for their rise: control, cost, scalability, data collection
    - Ethical concerns: deception, parasocial relationships with AI
    - Impacts: unrealistic standards, displacement of human creators, potential for malicious use (scams, propaganda)
    - Debate around regulation and disclosure for AI-generated content
    - The future of authenticity and trust online

    Connect with Privacy Please:
    Website: theproblemlounge.com
    YouTube: https://www.youtube.com/@privacypleasepodcast
    LinkedIn: https://www.linkedin.com/company/problem-lounge-network

    Resources & Further Reading (Sources Used / Suggested):
    - Federal Trade Commission (FTC): Guidelines on disclosure for influencers (relevant for future AI disclosure discussions)
    - Academic research: Studies on parasocial relationships with media figures (can be applied to AI); research on the ethics of AI and synthetic media
    - Industry insights: Reports from marketing agencies on virtual influencer trends; articles from tech publications (e.g., Wired, The Verge, MIT Tech Review) covering Lil Miquela and similar figures

    13 min
4.7 out of 5 (29 Ratings)
