The IT Privacy and Security Weekly Update.

R. Prescott Stearns Jr.

Into year five for this award-winning, light-hearted, lightweight IT privacy and security podcast that spans the globe in terms of issues covered with topics that draw in everyone from executive, to newbie, to tech specialist. Your investment of between 15 and 20 minutes a week will bring you up to speed on half a dozen current IT privacy and security stories from around the world to help you improve the management of your own privacy and security.

  1. 1D AGO

    The Mistake Before the Break. The IT Privacy and Security Weekly Update for the week ending September 16th, 2025.

    EP 260 This is our last update before a two-week break, so we've packed it. We start with the devastating cyber attack on Jaguar Land Rover, which exposes the fragility of modern manufacturing, halting production and threatening the UK’s automotive supply chain. Russia’s state-backed Max messaging app, touted as secure, has become a breeding ground for scams, undermining user trust and safety. UK schools face a surge in cyber attacks driven by students exploiting weak credentials, revealing critical gaps in educational data security. A stolen iPhone sparked a security researcher’s investigation, dismantling a global criminal network profiting from phishing and device theft. Major US airlines are selling billions of passenger records to the government, enabling warrantless surveillance and raising privacy alarms. A federal court upholds a $46.9M fine against Verizon for illegally selling customer location data, reinforcing privacy protections. A third of UK employers deploy 'bossware' to monitor workers, sparking concerns over privacy and trust in the workplace. Undetected Chinese-made radios in US highway infrastructure raise alarms over potential remote tampering and data theft. Apple’s Memory Integrity Enforcement introduces robust protection against memory-based attacks, setting a new standard for device security. Google’s VaultGemma pioneers privacy-focused AI, leveraging differential privacy to safeguard user data in large language models. The AI Darwin Awards spotlight reckless AI deployments, from fast-food blunders to catastrophic data losses; it’s both entertaining and scary at the same time. Adventures await in the mistake before the break!
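    VaultGemma's headline technique, differential privacy, adds calibrated random noise so that no single user's data meaningfully changes what a model reveals. A toy sketch of the core idea on a simple counting query (this illustrates the Laplace mechanism only; VaultGemma's actual training applies noise to gradients, and the function below is ours, not Google's):

    ```python
    import random

    def dp_count(values, epsilon=1.0):
        """Noisy count of True entries via the Laplace mechanism.

        A counting query has sensitivity 1, so Laplace noise with
        scale 1/epsilon yields epsilon-differential privacy.
        """
        true_count = sum(1 for v in values if v)
        # The difference of two Exp(epsilon) draws is Laplace-distributed
        # with the required scale.
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return true_count + noise
    ```

    Smaller epsilon means more noise and stronger privacy; the released count is close to, but deliberately never exactly, the true value.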

    25 min
  2. EP 259.5 Deep Dive. In the Picture with The IT Privacy and Security Weekly Update for the week ending September 9th, 2025

    6D AGO · BONUS

    EP 259.5 The cybersecurity and technology threat landscape is accelerating in scale, sophistication, and impact. A convergence of AI-driven offensive capabilities, large-scale supply chain compromises, systemic insecurity in consumer devices, corporate data abuses, and state-level spyware deployment is reshaping digital risk. At the same time, new innovations—particularly in open-source, privacy-centric AI and smart home repurposing—highlight the dual-edged nature of technological progress.

    AI-Accelerated Exploits: Attackers now harness generative AI to automate exploit creation, compressing timelines from months to minutes. “Auto Exploit,” powered by Claude-sonnet-4.0, can produce functional PoC code for vulnerabilities in under 15 minutes at negligible cost, fundamentally shifting defensive priorities. The challenge is no longer whether a flaw is technically exploitable but how quickly exposure becomes weaponized.

    Massive Supply Chain Attacks: Software ecosystems remain prime targets. A phishing campaign against a single npm maintainer led to malware injection into packages downloaded billions of times weekly, constituting the largest supply-chain attack to date. This demonstrates how a single compromised account can ripple globally across developers, enterprises, and end users.

    Weaponization of Benign Formats: Attackers increasingly exploit trusted file types. SVG-based phishing campaigns deliver malware through fake judicial portals, evading antivirus detection with obfuscation and dummy code. Over 500 samples were linked to one campaign, prompting Microsoft to disable inline SVG rendering in Outlook as a mitigation measure.

    Systemic Insecurity in IoT: Low-cost consumer devices, particularly internet-connected surveillance cameras, ship with unpatchable flaws. Weak firmware, absent encryption, bypassable authentication, and plain-text data transmission expose users to surveillance rather than security. These systemic design failures create enduring vulnerabilities at scale.

    Corporate Breaches and Data Abuse: The Plex breach underscored the persistence of corporate data exposure, with compromised usernames and passwords requiring resets. Meanwhile, a federal jury fined Google $425.7M for secretly tracking 98M devices despite user privacy settings—reinforcing that legal and financial consequences for privacy violations are escalating, even if damages remain below consumer expectations.

    Government Spyware Deployment: Civil liberties are increasingly tested by state adoption of invasive surveillance tools. U.S. Immigration and Customs Enforcement resumed a $2M deal for Graphite spyware, capable of infiltrating encrypted apps and activating microphones. The contract proceeded after regulatory hurdles were bypassed through a U.S. acquisition of its Israeli parent company, raising alarms about due process, counterintelligence risks, and surveillance overreach.

    Emerging Innovations: Not all developments are regressive. Philips Hue’s “MotionAware” demonstrates benign repurposing of smart home technology, transforming bulbs into RF-based motion sensors with AI-powered interpretation. Meanwhile, Switzerland’s Apertus project launched an open-source LLM designed with transparency and privacy at its core—providing public access to weights, training data, and checkpoints, framing AI as digital infrastructure for the public good.

    Conclusion: The digital environment is marked by intensifying threats: faster, cheaper, and more pervasive attacks; systemic insecurity in consumer technologies; corporate and governmental encroachments on privacy; and the weaponization of formats once considered harmless. Yet the emergence of open, privacy-first AI and the creative repurposing of consumer tech illustrate parallel efforts to realign innovation with security and transparency. The result is a complex, high-velocity ecosystem where defensive strategies must adapt as quickly as offensive capabilities evolve.

    21 min
  3. SEP 10

    In the Picture with The IT Privacy and Security Weekly Update for the week ending September 9th, 2025

    EP 259 In this week’s update: Affordable LookCam devices, marketed as home security solutions, harbor critical vulnerabilities that could allow strangers to access your private video feeds. VirusTotal uncovers a sophisticated phishing campaign using SVG files to disguise malware, targeting users with fake Colombian judicial portals. Plex alerts users to a data breach compromising emails, usernames, and hashed passwords, urging immediate password resets to secure accounts. Philips Hue’s innovative MotionAware feature transforms smart bulbs into motion sensors, enhancing home automation with cutting-edge RF technology. A massive supply chain attack compromises npm packages, affecting billions of downloads through a phishing scheme targeting maintainers’ accounts. Google faces a $425.7 million verdict for covertly tracking nearly 98 million smartphones, violating user privacy despite opt-out settings. Switzerland’s Apertus, a fully open-source AI model, sets a new standard for privacy, offering transparency and compliance with stringent data laws. An AI-driven tool, Auto Exploit, revolutionizes cybersecurity by generating exploit code in under 15 minutes, reshaping defensive strategies. ICE’s adoption of Paragon’s Graphite spyware, capable of infiltrating encrypted apps, sparks concerns over privacy and surveillance in immigration enforcement. Look closely and perhaps you’ll see it in the picture.

    20 min
  4. SEP 4 · BONUS

    258.5 Deep Dive. We can see you. The IT Privacy and Security Weekly Update for the week ending September 2nd, 2025

    Modern technology introduces profound privacy and security challenges. Wi-Fi and Bluetooth devices constantly broadcast identifiers like SSIDs, MAC addresses, and timestamps, which services such as Wigle.net and major tech companies exploit to triangulate precise locations. Users can mitigate exposure by appending _nomap to SSIDs, though protections remain incomplete, especially against companies like Microsoft that use more complex opt-out processes. At the global scale, state-sponsored hacking represents an even larger threat. A Chinese government-backed campaign has infiltrated critical communication networks across 80 nations and at least 200 U.S. organizations, including major carriers. These intrusions enabled extraction of sensitive call records and law enforcement directives, undermining global privacy and revealing how deeply foreign adversaries can map communication flows. AI companies are also reshaping expectations of confidentiality. OpenAI now scans user conversations for signs of harmful intent, with human reviewers intervening and potentially escalating to law enforcement. While the company pledges not to report self-harm cases, the shift transforms ChatGPT from a private interlocutor into a monitored channel, raising ethical questions about surveillance in AI systems. Similarly, Anthropic has adopted a new policy to train its models on user data, including chat transcripts and code, while retaining records for up to five years unless users explicitly opt out by a set deadline. This forces individuals to choose between enhanced AI capabilities and personal privacy, knowing that once data is absorbed into training, confidentiality cannot be reclaimed. Research has further exposed the fragility of chatbot safety systems. By crafting long, grammatically poor run-on prompts that delay punctuation, users can bypass guardrails and elicit harmful outputs. 
    This underscores the need for layered defenses: input sanitization, real-time filtering, and improved oversight beyond alignment training alone. Security risks also extend into software infrastructure. Widely used tools such as the Node.js library fast-glob, essential to both civilian and military systems, are sometimes maintained by a single developer abroad. While open-source transparency reduces risk, concentration of control in geopolitically sensitive regions raises concerns about potential sabotage, exploitation, or covert compromise. Meanwhile, regulators are tightening defenses against longstanding consumer threats. The FCC will enforce stricter STIR/SHAKEN rules by September 2025, requiring providers to sign calls with their own certificates instead of relying on third parties. Non-compliance could result in fines and disconnection, offering consumers more reliable caller ID and fewer spoofed robocalls. Finally, ethical boundaries around AI and digital identity are being tested. Meta has faced criticism for enabling or creating AI chatbots that mimic celebrities like Taylor Swift and Scarlett Johansson without consent, often producing flirty or suggestive interactions. Rival platforms like X’s Grok face similar accusations. Beyond violating policies and reputations, the trend of unauthorized digital doubles, including of minors, raises serious concerns about exploitation, unhealthy attachments, and reputational harm. Together, these cases reveal a central truth: digital systems meant to connect, entertain, and innovate increasingly blur the lines between utility, surveillance, and exploitation. Users and institutions alike must navigate trade-offs between convenience, capability, and control, while regulators and technologists scramble to impose safeguards in a rapidly evolving landscape.
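    The run-on-prompt bypass suggests one cheap first layer of input sanitization: measure how far a prompt goes without sentence-ending punctuation. A minimal heuristic sketch (the 60-word threshold and function name are illustrative assumptions, not a tuned or published filter):

    ```python
    import re

    def looks_like_runon(prompt: str, max_words_per_sentence: int = 60) -> bool:
        """Flag prompts containing an unusually long stretch of words with
        no sentence-ending punctuation -- the pattern reported to slip
        past alignment-only guardrails."""
        # Split on sentence-ending marks and check each resulting segment.
        segments = re.split(r"[.!?]+", prompt)
        return any(len(seg.split()) > max_words_per_sentence for seg in segments)
    ```

    On its own this is easily evaded; the point of the layered-defense argument is that it sits in front of real-time output filtering and oversight, not in place of them.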

    20 min
  5. AUG 28 · BONUS

    257.5 Deep Dive. The Super Intelligent IT Privacy and Security Weekly Update for the week ending August 26th, 2025

    Organizations today face escalating cyber risks spanning state-sponsored attacks, supply chain compromises, and malicious apps. ShinyHunters’ breaches of Salesforce platforms (impacting Google and Farmers Insurance) show how social engineering—like voice phishing—can exploit trusted vendors. Meanwhile, Russian actors (FSB-linked “Static Tundra”) continue to leverage old flaws, such as a seven-year-old Cisco Smart Install bug, to infiltrate U.S. infrastructure. Malicious apps on Google Play (e.g., Joker, Anatsa) reached millions of downloads before removal, proving attackers’ success in disguising malware. New technologies bring fresh vectors: Perplexity’s Comet browser allowed prompt injection–driven account hijacking, while malicious RDP scanning campaigns exploit timing to maximize credential theft. Responses vary between safeguarding and asserting control. The FTC warns U.S. firms against weakening encryption or enabling censorship under foreign pressure, citing legal liability. By contrast, Russia mandates state-backed apps like MAX Messenger and RuStore, raising surveillance concerns. Microsoft, facing leaks from its bug-sharing program, restricted exploit code access to higher-risk countries. Open-source projects like LibreOffice gain traction as sovereignty tools—privacy-first, telemetry-free, and free of vendor lock-in. AI-powered wearables such as Halo X smart glasses blur lines between utility and surveillance. Their ability to “always listen” and transcribe conversations augments human memory but erodes expectations of privacy. The founders’ history with facial recognition raises additional misuse concerns. As AI integrates directly into conversation and daily life, the risks of pervasive recording, ownership disputes, and surveillance intensify. Platforms like Bluesky are strained by conflicting global regulations. Mississippi’s HB 1126 requires universal age verification, fines for violations, and parental consent for minors. 
Lacking resources for such infrastructure, Bluesky withdrew service from the state. This illustrates the tension between regulatory compliance, resource limits, and preserving open user access. AI adoption is now a competitive imperative. Coinbase pushes aggressive integration, requiring engineers to embrace tools like GitHub Copilot or face dismissal. With one-third of its code already AI-generated, Coinbase aims for 50% by quarter’s end, supported by “AI Speed Runs” for knowledge-sharing. Yet, rapid adoption risks employee dissatisfaction and AI-generated security flaws, underscoring the need for strict controls alongside innovation. Breaches at Farmers Insurance (1.1M customers exposed) and Google via Salesforce illustrate the scale of third-party risk. Attackers exploit trusted platforms and human error, compromising data across multiple organizations at once. This shows security depends not only on internal defenses but on continuous vendor vetting and monitoring. Governments often demand access that undermines encryption, privacy, and transparency. The FTC warns that backdoors or secret concessions—such as the UK’s (later retracted) request for Apple to weaken iCloud—violate user trust and U.S. law. Meanwhile, Russia’s mandatory domestic apps exemplify sovereignty used for surveillance. Companies face a global tug-of-war between privacy, compliance, and open internet principles. Exploited legacy flaws prove that vulnerabilities never expire. Cisco’s years-old Smart Install bug, still unpatched in many systems, allows surveillance of critical U.S. sectors. Persistent RDP scanning further highlights attackers’ patience and scale. The lesson is clear: proactive patching, continuous updates, and rigorous audits are essential. Cybersecurity demands ongoing vigilance against both emerging and legacy threats.

    19 min
  6. AUG 21

    EP 256.5 Deep Dive. The IT Privacy and Security Weekly Update for the week ending August 19th, 2025, and Something Phishy

    Phishing Training Effectiveness: A study of over 19,000 employees showed traditional phishing training has limited impact, improving scam detection by just 1.7% over eight months. Despite varied training methods, over 50% of participants fell for at least one phishing email, highlighting persistent user susceptibility and the need for more effective cybersecurity education strategies. Cybersecurity Risks in Modern Cars: Modern connected vehicles are highly vulnerable to cyberattacks. A researcher exploited flaws in a major carmaker’s web portal, gaining “national admin” access to dealership data and demonstrating the ability to remotely unlock cars and track their locations using just a name or VIN. This underscores the urgent need for regular vehicle software updates and stronger manufacturer security measures to prevent data breaches and potential vehicle control by malicious actors. Nation-State Cyberattacks on Infrastructure: Nation-state cyberattacks targeting critical infrastructure are escalating. Russian hackers reportedly took control of a Norwegian hydropower dam, releasing water undetected for hours. While no physical damage occurred, such incidents reveal the potential for widespread disruption and chaos, signaling a more aggressive stance by state-sponsored cyber actors and the need for robust infrastructure defenses. AI Regulation in Mental Health Therapy: States like Illinois, Nevada, and Utah are regulating or banning AI in mental health therapy due to safety and privacy concerns. Unregulated AI chatbots risk harmful interactions with vulnerable users and unintended data exposure. New laws require licensed professional oversight and prohibit marketing AI chatbots as standalone therapy tools to protect users. Impact of Surveillance Laws on Privacy Tech: Proposed surveillance laws, like Switzerland’s data retention mandates, are pushing privacy-focused tech firms like Proton to relocate infrastructure. 
Proton is moving its AI chatbot, Lumo, to Germany and considering Norway for other services to uphold its no-logs policy. This reflects the tension between national security and privacy, driving companies to seek jurisdictions with stronger data protection laws. Data Brokers and Privacy Challenges: Data brokers undermine consumer privacy despite laws like California’s Consumer Privacy Act. Over 30 brokers were found hiding data deletion instructions from Google search results using specific code, creating barriers for consumers trying to opt out of data collection. This intentional obfuscation frustrates privacy rights and weakens legislative protections. Android pKVM Security Certification: Android’s protected Kernel-based Virtual Machine (pKVM) earned SESIP Level 5 certification, the first software security solution for consumer electronics to achieve this standard. Designed to resist sophisticated attackers, pKVM enables secure handling of sensitive tasks like on-device AI processing, setting a new benchmark for consistent, verifiable security across Android devices. VPN Open-Source Code Significance: VP.NET’s decision to open-source its Intel SGX enclave code on GitHub enhances transparency in privacy technology. By allowing public verification, users can confirm the code running on servers matches the open-source version, fostering trust and accountability. This move could set a new standard for the VPN and privacy tech industry, encouraging others to prioritize verifiable privacy claims.
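    Open-sourcing enclave code only builds trust if users can check that a published build matches what they received. SGX remote attestation handles the hard part of proving what runs inside the enclave; the simplest building block of any such verification is comparing an artifact's digest against a published one, sketched here (function names are ours, not VP.NET's):

    ```python
    import hashlib

    def sha256_of(path: str) -> str:
        """Stream a file through SHA-256 and return its hex digest."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def matches_published(path: str, published_digest: str) -> bool:
        """True if the local artifact matches the digest published
        alongside the open-source code."""
        return sha256_of(path) == published_digest
    ```

    Reproducible builds close the loop: anyone can compile the GitHub source, hash the result, and confirm it matches the digest the provider attests to.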

    18 min
4.5 out of 5 · 4 Ratings
