The AI, Privacy, and Security Weekly Update

R. Prescott Stearns Jr.

Now into year 7, this award-winning, light-hearted, lightweight AI privacy and security podcast covers issues from around the globe, with topics that draw in everyone from executives to newcomers to tech specialists. For season 7, we've renamed the IT Privacy and Security Weekly Update to the AI, Privacy, and Security Weekly Update to better reflect the content. Your investment of between 15 and 20 minutes a week will bring you up to speed on half a dozen current AI privacy and security stories from around the world to help you improve the management of your own privacy and security.

  1. The Anthropic Privacy and Security Issues.

    1 DAY AGO ·  BONUS

    The Anthropic Privacy and Security Issues.

    Anthropic Claude Desktop Native Messaging Bridge: The Report (April 2026)

    Anthropic’s official Claude Desktop application (Electron-based, for macOS and Windows) automatically installs an undocumented Native Messaging host bridge during installation and on every launch. On macOS, it places a manifest file (com.anthropic.claude_browser_extension.json) and an associated helper binary in the NativeMessagingHosts directories of seven Chromium-based browsers (Chrome, Edge, Brave, Arc, Vivaldi, Opera, and Chromium), even for browsers the user has not installed. On Windows, the same behavior is implemented via registry entries under the relevant browser keys rather than filesystem manifests. The bridge pre-authorizes specific Anthropic-controlled Chrome extension IDs to communicate directly with the desktop app via standard input/output, outside the browser sandbox. It runs with user-level privileges, is rewritten on each launch (making removal non-persistent), and is not mentioned in the installer, documentation, settings, or release notes.

    Functionality Enabled

    The bridge supports Anthropic’s Claude Cowork (desktop agentic workflows) and Dispatch (remote task assignment from mobile). When activated by a compatible Claude browser extension, it enables high-fidelity browser automation, including:

    - Direct DOM access and reading of page content
    - Authenticated session sharing (using existing logins/cookies)
    - Interactive control (form filling, clicking, navigation, scrolling)
    - Data extraction and multi-step web workflows
    - Session recording as GIFs

    This provides a more reliable and precise alternative to screenshot-based “computer use” for web tasks, allowing Claude to act as a seamless “digital coworker” on real browser sessions without constant manual intervention or context switching.

    Why Anthropic Is Taking This Approach

    Anthropic is prioritizing frictionless, agentic AI capabilities to make Claude more useful for productivity and automation. By pre-registering the bridge, the company ensures browser integration is immediately available, enabling Cowork/Dispatch features without separate manual extension setup or configuration steps. This design choice supports its vision of Claude as an autonomous assistant capable of handling real-world web-based work (e.g., data aggregation, form handling, testing) across common browsers. The implementation is cross-platform and persistent to maintain a consistent, “always-ready” experience.

    However, it has drawn criticism for lacking transparency, explicit user consent, and documentation, as well as for modifying other vendors’ application directories and creating potential security surface area (e.g., prompt-injection risks once activated). As of 21 April 2026, Anthropic has not issued a public response to the report. The approach reflects a common industry tension: balancing powerful AI agent functionality with user control and privacy expectations. Users concerned about the bridge can manually remove the manifests/registry entries, though the app may recreate them on relaunch.
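    For listeners who want to check their own machines, here is a minimal Python sketch that looks for the manifest named in the report. The browser directory paths are assumptions based on standard Chromium-family Native Messaging conventions on macOS (only the manifest filename comes from the report), so adjust them for your setup:

    ```python
    #!/usr/bin/env python3
    """Check common macOS NativeMessagingHosts directories for the bridge
    manifest named in the report. Browser paths are assumptions based on
    standard Chromium conventions and may differ on your system."""
    from pathlib import Path

    # Manifest filename reported for the Claude Desktop bridge.
    MANIFEST = "com.anthropic.claude_browser_extension.json"

    # Assumed per-browser NativeMessagingHosts locations, relative to $HOME.
    # The report names seven browsers; only the most common are listed here.
    BROWSER_DIRS = {
        "Chrome":   "Library/Application Support/Google/Chrome/NativeMessagingHosts",
        "Edge":     "Library/Application Support/Microsoft Edge/NativeMessagingHosts",
        "Brave":    "Library/Application Support/BraveSoftware/Brave-Browser/NativeMessagingHosts",
        "Vivaldi":  "Library/Application Support/Vivaldi/NativeMessagingHosts",
        "Chromium": "Library/Application Support/Chromium/NativeMessagingHosts",
    }

    def find_bridge_manifests(home: Path) -> dict:
        """Return {browser: manifest path} for each directory containing it."""
        found = {}
        for browser, rel in BROWSER_DIRS.items():
            candidate = home / rel / MANIFEST
            if candidate.is_file():
                found[browser] = candidate
        return found

    if __name__ == "__main__":
        hits = find_bridge_manifests(Path.home())
        if hits:
            for browser, path in hits.items():
                print(f"{browser}: {path}")
        else:
            print("No bridge manifests found.")
    ```

    Deleting a found manifest removes the pre-authorization for that browser, but, as the report notes, the desktop app may recreate it on its next launch.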

    47 min
  2. Episode 287.5. Deep Dive. Taxing. The AI, Privacy, and Security Weekly Update for the Week ending April 14th 2026

    6 DAYS AGO ·  BONUS

    Episode 287.5. Deep Dive. Taxing. The AI, Privacy, and Security Weekly Update for the Week ending April 14th 2026

    Cybersecurity is entering an “invisibility crisis,” where threats are no longer loud, external attacks but subtle abuses of normal system behavior. Techniques like SockStress exploit TCP assumptions to drain resources, residential proxy networks turn everyday users into unwitting infrastructure, and fake VPNs weaponize trust to exfiltrate data. Even ransomware response processes are being hijacked, transforming incident response into an attack surface. At the same time, transparency mechanisms are failing—Google, Meta, and Microsoft frequently ignore user opt-outs—highlighting a systemic breakdown in consent and accelerating calls for digital sovereignty.

    This shift feeds directly into geopolitics. Nations increasingly view reliance on foreign technology as a strategic risk, pushing “digital sovereignty” agendas. France, for example, is migrating government systems to domestic or open-source alternatives like Linux and Jitsi, and relocating sensitive health data infrastructure. Meanwhile, advanced AI proliferation introduces a paradox: companies restrict powerful models to prevent misuse, yet real-world breaches—such as the Tianjin Supercomputer incident, where attackers exfiltrated 10 petabytes via a compromised VPN—demonstrate how stealthy, persistent threats can evade detection at scale.

    Critical infrastructure remains especially vulnerable. Iran-linked actors have targeted industrial control systems (PLCs), showing how cyber intrusions can translate into physical manipulation. The message is clear: internet-connected industrial systems must adopt stronger controls, including multifactor authentication and continuous monitoring, particularly across energy and water sectors.

    Alongside these risks, the workforce itself is transforming. AI is shifting human roles from execution to oversight—people increasingly “direct” rather than “do.” However, this creates a paradox: while AI boosts productivity, it also increases complexity, oversight demands, and cognitive load. Managers now supervise fleets of AI agents, and professionals often refine AI outputs instead of producing original work. Despite widespread tech layoffs, judgment, accountability, and problem framing are becoming the most valuable—and scarce—skills.

    The broader theme is one of diminishing visibility and control. Whether in cybersecurity, geopolitics, or labor, systems are becoming more opaque, automated, and interdependent. Even efforts to uncover foundational truths—like identifying Satoshi Nakamoto—remain inconclusive despite advanced analysis. In this environment, the key differentiator is no longer technical capability alone, but human judgment: the ability to question assumptions, verify continuously, and navigate a world where the greatest risks are hidden in plain sight.

    37 min
  3. 15 APR

    Taxing. The AI, Privacy, and Security Weekly Update for the Week ending April 14th 2026

    Episode 287. On the day before the tax deadline in the US, we’ve got the most taxing update yet, full of unexpected deductions: OpenAI has unveiled bold policy recommendations to cushion the societal impact of advanced AI, including robot taxes, a public wealth fund, and trials of a four-day workweek.  Add in cake for all, and we’d swear Marie Antoinette was running the company. As AI assumes more decision-making roles, human work is evolving from task execution to high-level direction, judgment, and problem framing. Hopefully, there’s still time to talk to your school’s guidance counselor about changing your major. Professionals are now building personal “AI teams” of multiple specialized agents, dramatically expanding individual capacity while reshaping workloads and expectations. Citing potential misuse risks, OpenAI is restricting access to its most powerful new cybersecurity model, following a cautious approach already adopted by Anthropic.  “It’s so good you can’t have it.” A hacker group known as “FlamingChina” claims to have exfiltrated over 10 petabytes of sensitive data from China’s National Supercomputing Center in Tianjin in one of the largest breaches on record. Iran-linked hackers have reportedly disrupted critical operational systems at U.S. oil, gas, and water facilities, in a demonstration of “You hit us, we hit you.” A new independent audit reveals that Google, Microsoft, and Meta shockingly continue tracking users even after privacy opt-out signals are enabled. The New York Times has published a detailed investigation naming British cryptographer Adam Back as the strongest circumstantial candidate yet to be Bitcoin’s mysterious creator, Satoshi Nakamoto.  Quick, now’s the time to get really friendly with Adam. And just like filing taxes, the sooner we get to it, the sooner we get our refund!  Let’s go!

    23 min
  4. Episode 286.5. The Deep Dive. Subliminal Learning with the AI, Security, and Privacy Weekly Update for the Week ending April 7th, 2026

    9 APR ·  BONUS

    Episode 286.5. The Deep Dive. Subliminal Learning with the AI, Security, and Privacy Weekly Update for the Week ending April 7th, 2026

    First up, AI. You’d think if you clean your training data, you control what the model learns. Nope. Researchers just showed that models can pass hidden traits to each other through data that looks completely harmless. Like numbers. No obvious bias, no keywords, nothing. And the new model still picks up the same behavior. Even after you scrub it. Think of it like this. The data looks clean, but the intent is still in there, baked into the structure. So now we have AI systems where you can’t fully prove what they learned. You can test outputs, sure, but you can’t audit the mind. That’s a supply chain problem.

    Next, LinkedIn. You know how you log in and think you’re just updating your resume? Turns out they may have been scanning your browser for extensions. Thousands of them. And extensions tell a story. Health apps, finance tools, job search plugins, political stuff. That’s basically your personality in JSON form. LinkedIn says it’s for security. Maybe. But the bigger lesson is this: your browser is now part of your identity surface. Not just what you do online, but what you’ve installed.

    Now let’s talk about your fridge. Yes, your fridge. Samsung pushed ads onto $2,000 refrigerators. After people bought them. So now your kitchen appliance is also an ad platform. You didn’t opt in, you just got updated. Same play with TVs. Walmart bought Vizio, and now some TVs require a Walmart account to work properly. Why? Because the TV isn’t the product. The data is. What you watch plus what you buy equals a very valuable profile.

    Software side, GitHub is exploding. We’re talking billions of commits. AI is helping people write code faster than ever. Sounds great until you realize nobody is reviewing most of it. More code means more bugs, more vulnerabilities, more weird dependencies sneaking in. Speed went up. Assurance did not.

    Then quantum computing. This one matters. We used to think breaking encryption would take millions of qubits. Now researchers are saying maybe ten thousand. That’s a huge shift. Not tomorrow, but not “someday” either. And here’s the kicker. If someone is recording encrypted traffic today, they can just sit on it and decrypt it later when the tech catches up. So anything that needs to stay secret for a long time is already at risk.

    Zooming out, AI investment is basically all happening in the US. Like almost all of it. That means one country is setting the pace, the standards, and the rules. Everyone else is kind of along for the ride. That’s not just business, that’s geopolitics.

    And finally, the courts are waking up. For years, platforms said “we don’t control the content.” Now judges are saying, “yeah, but you built the machine that decides what people see.” That’s a big shift. Algorithms are starting to look like products with liability.

    So the theme this week is simple. The real risks aren’t obvious anymore. They’re hidden in training data, in your browser, in your appliances, in algorithms making decisions you don’t see. Which means you don’t just ask what the system does. You ask what’s underneath it.

    32 min
  5. 8 APR

    Subliminal Learning with the AI, Security, and Privacy Weekly Update for the Week ending April 7th, 2026

    Episode 286. And have we got an update for you. Focus on this: Researchers have discovered that AI models can be secretly shaped by their training data even after every suspicious signal has been scrubbed out, which raises an uncomfortable question: do we actually know what we've built? It turns out the most comprehensive profile LinkedIn has on you isn't the one you wrote yourself. Samsung would like you to know that the $2,000 refrigerator you just bought comes with one small surprise: a billboard. AI-assisted coding has pushed GitHub to a billion commits a year, which sounds like extraordinary progress right up until you ask who reviewed all of it. The encryption keeping your most sensitive data safe was designed for a quantum threat that was supposed to be decades away, and researchers just moved the deadline. Last year, the world invested $98 billion in AI, and if you're wondering where the other countries went, the answer is $1.9 billion split between all of them combined. Walmart bought Vizio in 2024, and this week, they quietly revealed what they actually purchased: not the screens, but the 20 million living rooms attached to them. For the first time in a major courtroom, a tech platform is being held liable not for what users posted, but for the machine that decided who should see it. For this update, let’s not go subliminal! Find the full transcript of this podcast here.

    23 min
  6. Episode 285.5 Deep Dive. Patience and the AI, Privacy, and Security Weekly Update for the Week Ending March 31st, 2026

    2 APR ·  BONUS

    Episode 285.5 Deep Dive. Patience and the AI, Privacy, and Security Weekly Update for the Week Ending March 31st, 2026

    The Deep Dive for Episode 285.5 explores how patience has become a defining weapon in modern AI, privacy, and security threats. State-backed actors like Red Menshen are quietly compromising telecom infrastructure with stealthy kernel-level implants, turning networks into long-term surveillance platforms while remaining almost invisible. Social engineering is evolving too: campaigns like ClickFix prove that attackers no longer need exotic exploits when they can simply coach users into pasting malicious commands themselves. At the same time, the AI software ecosystem is showing its fragility, as the LiteLLM supply-chain scare demonstrates how a single compromised package can ripple across countless downstream systems.

    On the frontier-model side, Anthropic’s leaked “step change” system underscores how rapidly capabilities are accelerating while governance and operational controls struggle to keep pace. Research on AI essay grading highlights a similar misalignment, showing that LLM-based evaluators often reward surface polish over genuine understanding, raising serious concerns for any high-stakes use of automated assessment. Governments are moving to assert control: the US Department of Defense is driving AI vendors toward a single baseline that prioritizes military requirements, while China’s latest Five-Year Plan positions AI as an instrument of national power, emphasizing large-scale deployment, self-reliance, and ecosystem-level strategy.

    Finally, the Meta–Manus standoff illustrates how cross-border AI deals sit at the intersection of innovation, capital, and state control, turning corporate decisions into geopolitical flashpoints. Taken together, this episode illustrates that we are not just watching a tech race, but a slow, methodical restructuring of global power through technology, one that rewards deep security, thoughtful governance, and a healthy respect for the risks of quiet, patient adversaries.

    45 min
  7. 1 APR

    Patience and the AI, Privacy, and Security Weekly Update for the Week Ending March 31st, 2026

    Episode 285. This week, we uncover long-term offensive strategies that show how patience, normally a virtue, can be turned against its victims. A China-aligned threat group is quietly weaponizing telecom infrastructure with kernel-level backdoors, turning carriers into long-term strategic listening posts. A low-tech but highly effective social engineering campaign is turning everyday users into their own worst enemy by coaching them to execute the attacker's commands. A popular AI gateway narrowly avoided a cascading supply-chain breach after compromised packages exposed just how fragile modern dependency chains have become. A leaked cache of internal documents has forced Anthropic to confirm a powerful new model, spotlighting both its rapid progress and the operational risks of secrecy at scale. New research shows that AI graders systematically diverge from human judgment, rewarding polish over depth and raising red flags for automated assessment in high-stakes settings. The US Defense Department is pushing AI vendors onto a single contractual and ethical footing, signaling that military requirements will increasingly define how models can be used. China’s latest Five-Year Plan elevates AI from a growth priority to a full-spectrum instrument of national power, blending industrial policy with geopolitical strategy. And finally, the Meta–Manus deal has evolved into a geopolitical flashpoint, illustrating how cross-border AI acquisitions can collide head-on with state control and national security anxieties. You don’t even have to be patient with these discoveries. Let’s go! Find the full transcript of this podcast here.

    21 min

