The AI, Privacy, and Security Weekly Update

R. Prescott Stearns Jr.

Now in year 7, this award-winning, light-hearted, lightweight AI privacy and security podcast spans the globe in the issues it covers, drawing in everyone from executives to newbies to tech specialists. For season 7, we've renamed the IT Privacy and Security Weekly Update to the AI, Privacy, and Security Weekly Update to better reflect the content. Your investment of 15 to 20 minutes a week will bring you up to speed on half a dozen current AI privacy and security stories from around the world and help you improve the management of your own privacy and security.

  1. Episode 285.5 Deep Dive. Patience and the AI, Privacy, and Security Weekly Update for the Week Ending March 31st, 2026

    5D AGO ·  BONUS

    Episode 285.5 Deep Dive. Patience and the AI, Privacy, and Security Weekly Update for the Week Ending March 31st, 2026

    The Deep Dive for Episode 285.5 explores how patience has become a defining weapon in modern AI, privacy, and security threats. State-backed actors like Red Menshen are quietly compromising telecom infrastructure with stealthy kernel-level implants, turning networks into long-term surveillance platforms while remaining almost invisible. Social engineering is evolving too: campaigns like ClickFix prove that attackers no longer need exotic exploits when they can simply coach users into pasting malicious commands themselves. At the same time, the AI software ecosystem is showing its fragility, as the LiteLLM supply-chain scare demonstrates how a single compromised package can ripple across countless downstream systems. On the frontier-model side, Anthropic’s leaked “step change” system underscores how rapidly capabilities are accelerating while governance and operational controls struggle to keep pace. Research on AI essay grading highlights a similar misalignment, showing that LLM-based evaluators often reward surface polish over genuine understanding, raising serious concerns for any high-stakes use of automated assessment. Governments are moving to assert control: the US Department of Defense is driving AI vendors toward a single baseline that prioritizes military requirements, while China’s latest Five‑Year Plan positions AI as an instrument of national power, emphasizing large-scale deployment, self-reliance, and ecosystem-level strategy. Finally, the Meta–Manus standoff illustrates how cross-border AI deals sit at the intersection of innovation, capital, and state control, turning corporate decisions into geopolitical flashpoints. Taken together, this episode illustrates that we are not just watching a tech race, but a slow, methodical restructuring of global power through technology, one that rewards deep security, thoughtful governance, and a healthy respect for the risks of quiet, patient adversaries.

    45 min
  2. 6D AGO

    Patience and the AI, Privacy, and Security Weekly Update for the Week Ending March 31st, 2026

    Episode 285. This week, we uncover long-term offensive strategies showing that the virtue of patience can have a devastating impact on its victims. A China-aligned threat group is quietly weaponizing telecom infrastructure with kernel-level backdoors, turning carriers into long-term strategic listening posts. A low-tech but highly effective social engineering campaign is turning everyday users into their own worst enemy by coaching them to execute the attacker's commands. A popular AI gateway narrowly avoided a cascading supply-chain breach after compromised packages exposed just how fragile modern dependency chains have become. A leaked cache of internal documents has forced Anthropic to confirm a powerful new model, spotlighting both its rapid progress and the operational risks of secrecy at scale. New research shows that AI graders systematically diverge from human judgment, rewarding polish over depth and raising red flags for automated assessment in high-stakes settings. The US Defense Department is pushing AI vendors onto a single contractual and ethical footing, signaling that military requirements will increasingly define how models can be used. China’s latest Five-Year Plan elevates AI from a growth priority to a full-spectrum instrument of national power, blending industrial policy with geopolitical strategy. And finally, the Meta–Manus deal has evolved into a geopolitical flashpoint, illustrating how cross-border AI acquisitions can collide head-on with state control and national security anxieties. You don’t even have to be patient with these discoveries. Let’s go! Find the full transcript of this podcast here.

    21 min
  3. Episode 284.5. Deep Dive. Sold Out. The AI, Privacy, and Security Weekly Update for the week ending March 24th, 2026

    MAR 26 ·  BONUS

    Episode 284.5. Deep Dive. Sold Out. The AI, Privacy, and Security Weekly Update for the week ending March 24th, 2026

    The technology landscape has shifted so profoundly that “IT risk” no longer captures current threats. This publication is now the AI, Privacy, and Security Weekly Update, reflecting the reality that AI drives both innovation and adversary tactics. Episode 284 (week ending March 24, 2026) covers a surge of AI-driven developments, from autonomous malware to expanding federal data systems, marking the formal start of the surveillance era.

    The New Surveillance Perimeter: Government Data Aggregation. A centralized AI “intelligence layer” is forming to map daily life with precision. Federal Consolidation: internal reports reveal a proposed U.S. system combining immigration, financial, and biometric data into an AI-searchable database. Warrantless Access: FBI Director Kash Patel confirmed the resumption of buying commercial location data from brokers. The Upshot: this circumvents Fourth Amendment protections, enabling mass monitoring without individual warrants. The aggregation of sensitive datasets creates persistent “mission creep” and critical single points of failure.

    Autonomous Threats and Supply Chain Integrity. Adversaries now deploy self-propagating, automated infection loops that exploit development infrastructure. CanisterWorm: compromised credentials in Trivy propagated malware across 47 npm packages, harvesting tokens to spread autonomously. Open-Source Sabotage: a related campaign weaponized open-source libraries to erase data on systems in Iran. The Upshot: one stolen credential can now trigger a self-sustaining breach. Security strategy must extend beyond networks to verify every automated dependency.

    Infrastructure Vulnerabilities and State Control. Connectivity itself is becoming a tool of control, and a potential systemic failure. Strategic Disruption: Russia’s mobile internet outages illustrate “digital crackdowns,” and local businesses now lobby to restore access to foreign apps like Telegram and WhatsApp that are vital for global communication. IoT Lockouts: a cyberattack on Intoxalock disabled 150,000 court-mandated breathalyzers, stranding drivers. The Upshot: cloud dependence in IoT and politically constrained connectivity expose how fragile digital infrastructure has become for both citizens and commerce.

    Technological Shifts: Automation and Corporate Responsibility. By 2027, automated bot traffic will outpace human activity, forcing a transition from cybersecurity to automation management. Hardware like Intel’s “Heracles” chip increases Fully Homomorphic Encryption (FHE) speed 5,000-fold, enabling encrypted computation at scale. Yet “trust bombs” persist. H&R Block installed a root certificate (expiring 2049) with its private key embedded, allowing forged secure sites. Bucketsquatting: AWS closed a loophole letting attackers hijack deleted S3 bucket names. Privacy Push: the FCC banned new foreign-made routers over national security risks, and Mozilla introduced a 50GB/month VPN for Firefox to make privacy the default. The Upshot: as automated and AI-driven activity dominates the internet, privacy and trust have become core business imperatives, no longer optional features but essential components of market credibility.

    We hope you enjoyed this week's update and look forward to sharing more AI, Privacy, and Security stories next week!

    22 min
  4. MAR 25

    Sold Out. The AI, Privacy, and Security Weekly Update for the week ending March 24th, 2026

    Episode 284. Yes, that's it. So much of what we cover is now AI-based that we're updating the Update to reflect that. From today, the IT Privacy and Security Weekly Update will be formally renamed the AI, Privacy, and Security Weekly Update. In this week’s update: The FBI has officially confirmed it is once again purchasing commercial location data to track American citizens, bypassing traditional warrant requirements. A newly revealed government proposal outlines plans for a single, AI-powered database containing detailed personal information on virtually every American. TikTok and Meta’s advertising pixels are quietly collecting far more sensitive personal and behavioral data than most websites and users realize. A major cyberattack on Intoxalock has left thousands of drivers unable to start their court-ordered breathalyzer-equipped vehicles. H&R Block’s tax preparation software has been found to install a long-lived root certificate with its private key exposed, creating a serious security risk that can persist for decades. The FCC has banned imports of all new foreign-made consumer routers, citing severe national security risks posed by devices predominantly manufactured in China. Cloudflare’s CEO predicts that by 2027, AI-driven bot traffic will surpass human-generated internet traffic for the first time in history. Mozilla is rolling out a free built-in VPN in Firefox 149, initially available to users in the US, France, Germany, and the UK. Come on, let’s learn a little about what’s being sold around us!

    19 min
  5. Ep 282. Deep Dive. Invisible Signals and the IT Privacy and Security Weekly Update for the Week Ending March 10th, 2026.

    MAR 12 ·  BONUS

    Ep 282. Deep Dive. Invisible Signals and the IT Privacy and Security Weekly Update for the Week Ending March 10th, 2026.

    This week’s deep dive explores a powerful theme shaping the modern threat landscape: invisible signals. From the devices we wear and drive to the AI systems we increasingly rely on, our technology is constantly emitting data — sometimes to protect us, sometimes to expose us. We begin with a new Android app called Nearby Glasses, designed to alert users when smart glasses like Meta’s Ray-Bans are detected nearby via Bluetooth manufacturer identifiers. It’s a citizen-built countermeasure to always-on wearable cameras, highlighting rising tensions between convenience and consent in public spaces. Next, we examine research showing that tire pressure monitoring systems (TPMS), mandatory in U.S. vehicles since 2007, broadcast unencrypted, persistent identifiers. Researchers captured millions of signals and demonstrated how vehicles can be passively tracked using inexpensive radio equipment. No hacking required — just poorly designed IoT architecture turning cars into rolling beacons. From physical signals to digital footprints, a new study reveals that AI can deanonymize social media users by correlating small details across platforms. What once required nation-state resources can now be done with commodity large language models, fundamentally challenging the concept of online anonymity. We then dive into the “Truman Show” investment scam — a sophisticated fraud operation that uses AI-generated personas, fake group chats, fabricated media coverage, and sham trading apps to create a fully immersive illusion of legitimacy. Rather than stealing trust directly, scammers now manufacture entire digital realities where trust feels inevitable. AI agents themselves are also reshaping security assumptions. Modern assistants can access files, write code, and interact with online services using a user’s privileges. 
Researchers warn that prompt injection attacks — hidden malicious instructions embedded in content — can manipulate these agents into leaking data or performing harmful actions. When AI combines sensitive access, untrusted input, and outbound communication, it becomes a new form of insider risk. That risk was underscored by the OpenClaw vulnerability, which allowed malicious web pages to brute-force a local AI agent gateway and potentially hijack it. The lesson: “local” no longer means secure. Any system with elevated privileges must be treated as a governed identity. On the defensive side, AI is accelerating security improvements. Anthropic used a large language model to analyze Firefox’s codebase, identifying over 100 flaws in two weeks, including 22 confirmed security bugs. AI is compressing months of review into days — but the same acceleration applies to attackers. Finally, Operation Candy in Sweden demonstrates how digital evidence can unravel vast criminal networks. Two seized phones exposed an international drug and money laundering operation spanning multiple continents, proving that even small data points can collapse large hidden systems. Zooming out, the pattern is clear: wearables broadcast presence, cars broadcast identity, AI strips away anonymity, scams construct synthetic realities, assistants act autonomously, and devices quietly record history. Signals are everywhere — visible and invisible — and AI is amplifying their impact. The question is no longer whether your technology emits signals. It’s who is listening — and whether they’re protecting you or profiling you.

    33 min
