The AI, Privacy, and Security Weekly Update

R. Prescott Stearns Jr.

Now in its seventh year, this award-winning, light-hearted, lightweight AI privacy and security podcast covers issues from around the globe, with topics that draw in everyone from executives to newbies to tech specialists. For season 7, we've renamed the IT Privacy and Security Weekly Update to the AI, Privacy, and Security Weekly Update to better reflect the content. Your investment of 15 to 20 minutes a week will bring you up to speed on half a dozen current AI privacy and security stories from around the world and help you improve the management of your own privacy and security.

  1. EP 289. Deep Dive. Everything looked fine. The AI, Privacy, and Security Weekly Update for the week ending April 27th, 2026

    9 HR AGO ·  BONUS


    Warren Buffett once said it's only when the tide goes out that you discover who's been swimming naked. This week, the tide went out on several fronts simultaneously, and what it revealed was uncomfortable, instructive, and in some cases, long overdue.

    France opened the week with a breach that should trouble every government running centralised identity infrastructure. Up to 19 million records tied to passports, ID cards, and driver's licenses are now circulating on criminal forums. What makes this worse than a typical data leak is the context: a similar dataset from the same agency surfaced in 2025. This wasn't a surprise attack on a hardened target. It was a recurring failure wearing the face of a solved problem.

    The Bitwarden supply chain story carried a similar energy. No vaults were cracked, no passwords were stolen, and most users never noticed a thing. But a malicious package briefly moved through npm as part of the Checkmarx campaign, targeting the developers who build the software everyone else depends on. The lesson isn't technical; it's structural. Your security posture now extends to every build pipeline, every dependency, and every automation script upstream of your product.

    Then came FAST16.SYS, and the week shifted into something darker. This rootkit, which appears to predate Stuxnet, didn't steal data or trigger alarms. It quietly altered precision calculations in memory while leaving every file on disk untouched. Systems looked healthy. Outputs looked reasonable. The only thing wrong was the answer. It is the most patient form of sabotage imaginable, and it reframes what advanced threats are actually capable of when detection, not damage, is the real objective.

    AI brought its own escalation this week. Researchers are now using AI systems to attack other AI systems at machine speed: probing, learning, and refining exploits far faster than any human team. At the same time, agent browsers like Interceptor are quietly repositioning the browser itself as an autonomous actor, raising legitimate questions about oversight when software is doing the clicking, typing, and deciding on your behalf.

    Anthropic's Mythos model access story tied several threads together neatly. Contractor credentials, open-source reconnaissance, and data exposed in a third-party breach combined to give a small group access to a restricted model. The intent was curiosity, not sabotage, but the mechanism was a textbook illustration of how third-party access chains create exposure that principal organisations rarely see coming.

    Apple closed out the privacy section with a rare win, patching a logging bug that had been silently retaining Signal message fragments for up to a month: long after deletion, long after the app was removed. The FBI had already used it in court. The patch is clean and the fix is automatic, but the episode is a pointed reminder that ephemeral and permanent are closer together than most people assume.

    The week closed on strategy. OpenAI and Microsoft have restructured their foundational partnership, removing exclusivity and capping revenue payments. The AI infrastructure layer is becoming contested ground, and this deal confirms that no single partnership, however dominant it once appeared, is permanent. This week's stories didn't shout. They accumulated. And that, more than anything, is the point.
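The unsettling part of the FAST16.SYS story is the detection gap it exploits: file-integrity checks keep passing because nothing on disk changed, while the answer coming out of memory is subtly wrong. A toy sketch of that gap (all names and numbers hypothetical, nothing here models the actual rootkit): a hash check on the artifact says nothing about the running computation, so the only reliable tell is cross-checking results against an independent recomputation.

```python
"""Toy illustration of the in-memory tampering detection gap:
a file-hash integrity check passes even when the live computation
has been patched, so an independent recomputation of the result
is what actually catches the sabotage. Hypothetical example only."""
import hashlib

def file_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# The "on-disk" program: never modified, so its hash never changes.
PROGRAM = b"def precision_calc(x): return x * 1.000000"
BASELINE_HASH = file_hash(PROGRAM)

def precision_calc(x: float) -> float:
    return x * 1.000000  # intended behavior

def tampered_calc(x: float) -> float:
    return x * 1.000001  # in-memory patch: subtly wrong, looks plausible

# Disk looks fine either way: the hash check cannot see the patch.
assert file_hash(PROGRAM) == BASELINE_HASH

def cross_check(result: float, x: float, tol: float = 1e-9) -> bool:
    """Recompute on an independent path and compare within tolerance."""
    return abs(result - precision_calc(x)) <= tol

print(cross_check(precision_calc(1000.0), 1000.0))  # honest run passes
print(cross_check(tampered_calc(1000.0), 1000.0))   # sabotaged run fails
```

The point of the sketch is simply that "systems looked healthy" and "the answer was wrong" are compatible states; result verification, not artifact verification, closes the gap.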

    31 min
  2. 1 DAY AGO

    Everything looked fine. The AI, Privacy, and Security Weekly Update for the week ending April 27th, 2026

    EP 289. Let’s climb to the top of this week’s stories:

    France's most trusted identity infrastructure has become its biggest liability, and nineteen million citizens are now paying the price.

    The real lesson from Bitwarden's close call isn't about passwords; it's about how quietly an attack can move through the software you never see being built.

    A newly uncovered rootkit predating Stuxnet has rewritten what we thought we knew about state-level sabotage, and its most dangerous feature was making everything look perfectly normal.

    The arms race in AI security has hit a new threshold: machines are now the ones probing for weaknesses, and they don't need sleep to do it.

    The browser is no longer just a window to the web; it's becoming an autonomous actor, and that changes everything about who's actually in control.

    A restricted AI model, a contractor's borrowed credentials, and a private Discord channel: Anthropic's Mythos access story is a case study in how third-party trust becomes a front door.

    A logging bug quietly turned one of the world's most trusted encrypted messaging apps into an inadvertent evidence locker, and it took FBI courtroom testimony to bring it to light.

    OpenAI and Microsoft have redrawn the map of AI's most consequential partnership, and the shift from exclusivity to optionality signals a new phase in who controls the infrastructure layer.

    Tighten your shoelaces, and let’s get to the bottom of this. Find this week's transcript here.

    19 min
  3. The Anthropic Privacy and Security issues.

    21 APR ·  BONUS


    Anthropic Claude Desktop Native Messaging Bridge: The Report (April 2026). Anthropic’s official Claude Desktop application (Electron-based, for macOS and Windows) automatically installs an undocumented Native Messaging host bridge during installation and on every launch. On macOS, it places a manifest file (com.anthropic.claude_browser_extension.json) and an associated helper binary in the NativeMessagingHosts directories of seven Chromium-based browsers (Chrome, Edge, Brave, Arc, Vivaldi, Opera, and Chromium), even for browsers the user has not installed. On Windows, the equivalent entries are created in the registry under the relevant browser keys rather than as filesystem manifests. The bridge pre-authorizes specific Anthropic-controlled Chrome extension IDs to communicate directly with the desktop app via standard input/output, outside the browser sandbox. It runs with user-level privileges, is rewritten on each launch (making removal non-persistent), and is not mentioned in the installer, documentation, settings, or release notes. (Source: thatprivacyguy.com)

    Functionality enabled: The bridge supports Anthropic’s Claude Cowork (desktop agentic workflows) and Dispatch (remote task assignment from mobile). When activated by a compatible Claude browser extension, it enables high-fidelity browser automation, including:

    Direct DOM access and reading of page content
    Authenticated session sharing (using existing logins/cookies)
    Interactive control (form filling, clicking, navigation, scrolling)
    Data extraction and multi-step web workflows
    Session recording as GIFs

    This provides a more reliable and precise alternative to screenshot-based “computer use” for web tasks, allowing Claude to act as a seamless “digital coworker” on real browser sessions without constant manual intervention or context switching. (Source: pluto.security)

    Why Anthropic is taking this approach: Anthropic is prioritizing frictionless, agentic AI capabilities to make Claude more useful for productivity and automation. By pre-registering the bridge, the company ensures browser integration is immediately available for Cowork/Dispatch features, without requiring separate manual extension setup or configuration steps. This design choice supports their vision of Claude as an autonomous assistant capable of handling real-world web-based work (e.g., data aggregation, form handling, testing) across common browsers. The implementation is cross-platform and persistent to maintain a consistent, “always-ready” experience. However, it has drawn criticism for lacking transparency, explicit user consent, and documentation, as well as for modifying other vendors’ application directories and creating potential security surface area (e.g., prompt-injection risks once activated). As of 21 April 2026, Anthropic has not issued a public response to the report. The approach reflects a common industry tension: balancing powerful AI agent functionality with user control and privacy expectations. Users concerned about the bridge can manually remove the manifests/registry entries, though the app may recreate them on relaunch.
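The manifest placement described in the report can be checked locally. A minimal sketch for macOS follows; the manifest filename comes from the report itself, but the per-browser profile directory names below are assumed common defaults, not taken from the report, and may differ for your installation. Native Messaging manifests list pre-authorized extensions in an `allowed_origins` field, which is what the script prints.

```python
"""Sketch: scan per-user NativeMessagingHosts directories of common
Chromium-based browsers on macOS for the Claude bridge manifest.
Browser directory names are assumed defaults; verify for your setup."""
from pathlib import Path
import json

# Manifest name as given in the report.
MANIFEST = "com.anthropic.claude_browser_extension.json"

# Assumed per-user profile roots under ~/Library/Application Support.
BROWSER_DIRS = [
    "Google/Chrome",
    "Microsoft Edge",
    "BraveSoftware/Brave-Browser",
    "Vivaldi",
    "com.operasoftware.Opera",
    "Arc/User Data",
    "Chromium",
]

def scan(home: Path = Path.home()) -> list[Path]:
    """Return paths of any matching manifests found under `home`."""
    hits = []
    for d in BROWSER_DIRS:
        candidate = (home / "Library/Application Support" / d
                     / "NativeMessagingHosts" / MANIFEST)
        if candidate.is_file():
            hits.append(candidate)
    return hits

if __name__ == "__main__":
    for path in scan():
        data = json.loads(path.read_text())
        # allowed_origins lists the extension IDs authorized to talk
        # to the native host over stdin/stdout.
        print(path, "->", data.get("allowed_origins"))
```

As the report notes, deleting anything this finds is only temporary: the app rewrites the manifests on relaunch, so removal has to be repeated or the desktop app itself uninstalled.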

    47 min
  4. Episode 287.5. Deep Dive. Taxing. The AI, Privacy, and Security Weekly Update for the Week ending April 14th 2026

    16 APR ·  BONUS


    Cybersecurity is entering an “invisibility crisis,” where threats are no longer loud, external attacks but subtle abuses of normal system behavior. Techniques like SockStress exploit TCP assumptions to drain resources, residential proxy networks turn everyday users into unwitting infrastructure, and fake VPNs weaponize trust to exfiltrate data. Even ransomware response processes are being hijacked, transforming incident response into an attack surface. At the same time, transparency mechanisms are failing: Google, Meta, and Microsoft frequently ignore user opt-outs, highlighting a systemic breakdown in consent and accelerating calls for digital sovereignty.

    This shift feeds directly into geopolitics. Nations increasingly view reliance on foreign technology as a strategic risk, pushing “digital sovereignty” agendas. France, for example, is migrating government systems to domestic or open-source alternatives like Linux and Jitsi, and relocating sensitive health data infrastructure. Meanwhile, advanced AI proliferation introduces a paradox: companies restrict powerful models to prevent misuse, yet real-world breaches, such as the Tianjin Supercomputer incident, where attackers exfiltrated 10 petabytes via a compromised VPN, demonstrate how stealthy, persistent threats can evade detection at scale.

    Critical infrastructure remains especially vulnerable. Iran-linked actors have targeted industrial control systems (PLCs), showing how cyber intrusions can translate into physical manipulation. The message is clear: internet-connected industrial systems must adopt stronger controls, including multifactor authentication and continuous monitoring, particularly across the energy and water sectors.

    Alongside these risks, the workforce itself is transforming. AI is shifting human roles from execution to oversight; people increasingly “direct” rather than “do.” However, this creates a paradox: while AI boosts productivity, it also increases complexity, oversight demands, and cognitive load. Managers now supervise fleets of AI agents, and professionals often refine AI outputs instead of producing original work. Despite widespread tech layoffs, judgment, accountability, and problem framing are becoming the most valuable (and scarcest) skills.

    The broader theme is one of diminishing visibility and control. Whether in cybersecurity, geopolitics, or labor, systems are becoming more opaque, automated, and interdependent. Even efforts to uncover foundational truths, like identifying Satoshi Nakamoto, remain inconclusive despite advanced analysis. In this environment, the key differentiator is no longer technical capability alone, but human judgment: the ability to question assumptions, verify continuously, and navigate a world where the greatest risks are hidden in plain sight.

    37 min
  5. 15 APR

    Taxing. The AI, Privacy, and Security Weekly Update for the Week ending April 14th 2026

    Episode 287. On the day before the tax deadline in the US, we’ve got the most taxing update yet, full of unexpected deductions: OpenAI has unveiled bold policy recommendations to cushion the societal impact of advanced AI, including robot taxes, a public wealth fund, and trials of a four-day workweek.  Add in cake for all, and we’d swear Marie Antoinette was running the company. As AI assumes more decision-making roles, human work is evolving from task execution to high-level direction, judgment, and problem framing. Hopefully, there’s still time to talk to your school’s guidance counselor about changing your major. Professionals are now building personal “AI teams” of multiple specialized agents, dramatically expanding individual capacity while reshaping workloads and expectations. Citing potential misuse risks, OpenAI is restricting access to its most powerful new cybersecurity model, following a cautious approach already adopted by Anthropic.  “It’s so good you can’t have it.” A hacker group known as “FlamingChina” claims to have exfiltrated over 10 petabytes of sensitive data from China’s National Supercomputing Center in Tianjin in one of the largest breaches on record. Iran-linked hackers have reportedly disrupted critical operational systems at U.S. oil, gas, and water facilities, in a demonstration of “You hit us, we hit you.” A new independent audit reveals that Google, Microsoft, and Meta shockingly continue tracking users even after privacy opt-out signals are enabled. The New York Times has published a detailed investigation naming British cryptographer Adam Back as the strongest circumstantial candidate yet to be Bitcoin’s mysterious creator, Satoshi Nakamoto.  Quick, now’s the time to get really friendly with Adam. And just like filing taxes, the sooner we get to it, the sooner we get our refund!  Let’s go!

    23 min
  6. Episode 286.5. The Deep Dive. Subliminal Learning with the AI, Security, and Privacy Weekly Update for the Week ending April 7th, 2026

    9 APR ·  BONUS


    First up, AI. You’d think if you clean your training data, you control what the model learns. Nope. Researchers just showed that models can pass hidden traits to each other through data that looks completely harmless. Like numbers. No obvious bias, no keywords, nothing. And the new model still picks up the same behavior. Even after you scrub it. Think of it like this: the data looks clean, but the intent is still in there, baked into the structure. So now we have AI systems where you can’t fully prove what they learned. You can test outputs, sure, but you can’t audit the mind. That’s a supply chain problem.

    Next, LinkedIn. You know how you log in and think you’re just updating your resume? Turns out they may have been scanning your browser for extensions. Thousands of them. And extensions tell a story. Health apps, finance tools, job search plugins, political stuff. That’s basically your personality in JSON form. LinkedIn says it’s for security. Maybe. But the bigger lesson is this: your browser is now part of your identity surface. Not just what you do online, but what you’ve installed.

    Now let’s talk about your fridge. Yes, your fridge. Samsung pushed ads onto $2,000 refrigerators. After people bought them. So now your kitchen appliance is also an ad platform. You didn’t opt in, you just got updated. Same play with TVs. Walmart bought Vizio, and now some TVs require a Walmart account to work properly. Why? Because the TV isn’t the product. The data is. What you watch plus what you buy equals a very valuable profile.

    Software side, GitHub is exploding. We’re talking billions of commits. AI is helping people write code faster than ever. Sounds great until you realize nobody is reviewing most of it. More code means more bugs, more vulnerabilities, more weird dependencies sneaking in. Speed went up. Assurance did not.

    Then quantum computing. This one matters. We used to think breaking encryption would take millions of qubits. Now researchers are saying maybe ten thousand. That’s a huge shift. Not tomorrow, but not “someday” either. And here’s the kicker: if someone is recording encrypted traffic today, they can just sit on it and decrypt it later when the tech catches up. So anything that needs to stay secret for a long time is already at risk.

    Zooming out, AI investment is basically all happening in the US. Like almost all of it. That means one country is setting the pace, the standards, and the rules. Everyone else is kind of along for the ride. That’s not just business, that’s geopolitics.

    And finally, the courts are waking up. For years, platforms said “we don’t control the content.” Now judges are saying, “yeah, but you built the machine that decides what people see.” That’s a big shift. Algorithms are starting to look like products with liability.

    So the theme this week is simple. The real risks aren’t obvious anymore. They’re hidden in training data, in your browser, in your appliances, in algorithms making decisions you don’t see. Which means you don’t just ask what the system does. You ask what’s underneath it.
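The "record now, decrypt later" risk has a standard back-of-the-envelope test that wasn't named in the episode: Mosca's inequality. If the years your data must stay secret, plus the years it takes you to migrate to post-quantum crypto, exceed the years until a cryptographically relevant quantum computer exists, then traffic captured today is already exposed. A sketch with purely illustrative numbers:

```python
"""Sketch of "harvest now, decrypt later" exposure using Mosca's
inequality (a standard planning heuristic): if secrecy lifetime plus
migration time exceeds the estimated years until a cryptographically
relevant quantum computer, data recorded today is already at risk."""

def at_risk(secrecy_years: float, migration_years: float,
            years_to_quantum: float) -> bool:
    """Mosca's inequality: x + y > z means you needed to act yesterday."""
    return secrecy_years + migration_years > years_to_quantum

# Illustrative numbers only: records that must stay private 25 years,
# a 5-year migration, and a (heavily debated) 15-year quantum estimate.
print(at_risk(25, 5, 15))  # captured ciphertext outlives the estimate
print(at_risk(2, 1, 15))   # short-lived secrets are fine
```

The estimate for `years_to_quantum` is the contested input; the episode's point is that shrinking qubit requirements shrink that number, which flips more data into the at-risk column without anyone touching your systems.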

    32 min

