The IT Privacy and Security Weekly Update.

R. Prescott Stearns Jr.

Into year five for this award-winning, light-hearted, lightweight IT privacy and security podcast that spans the globe in terms of issues covered with topics that draw in everyone from executive, to newbie, to tech specialist. Your investment of between 15 and 20 minutes a week will bring you up to speed on half a dozen current IT privacy and security stories from around the world to help you improve the management of your own privacy and security.

  1. 4 DAYS AGO · BONUS

    257.5 Deep Dive. The Super Intelligent IT Privacy and Security Weekly Update for the week ending August 26th 2025

    Organizations today face escalating cyber risks spanning state-sponsored attacks, supply chain compromises, and malicious apps. ShinyHunters’ breaches of Salesforce platforms (impacting Google and Farmers Insurance) show how social engineering—like voice phishing—can exploit trusted vendors. Meanwhile, Russian actors (FSB-linked “Static Tundra”) continue to leverage old flaws, such as a seven-year-old Cisco Smart Install bug, to infiltrate U.S. infrastructure. Malicious apps on Google Play (e.g., Joker, Anatsa) reached millions of downloads before removal, proving attackers’ success in disguising malware. New technologies bring fresh vectors: Perplexity’s Comet browser allowed prompt injection–driven account hijacking, while malicious RDP scanning campaigns exploit timing to maximize credential theft. Responses vary between safeguarding and asserting control. The FTC warns U.S. firms against weakening encryption or enabling censorship under foreign pressure, citing legal liability. By contrast, Russia mandates state-backed apps like MAX Messenger and RuStore, raising surveillance concerns. Microsoft, facing leaks from its bug-sharing program, restricted exploit code access to higher-risk countries. Open-source projects like LibreOffice gain traction as sovereignty tools—privacy-first, telemetry-free, and free of vendor lock-in. AI-powered wearables such as Halo X smart glasses blur lines between utility and surveillance. Their ability to “always listen” and transcribe conversations augments human memory but erodes expectations of privacy. The founders’ history with facial recognition raises additional misuse concerns. As AI integrates directly into conversation and daily life, the risks of pervasive recording, ownership disputes, and surveillance intensify. Platforms like Bluesky are strained by conflicting global regulations. Mississippi’s HB 1126 requires universal age verification, fines for violations, and parental consent for minors. Lacking resources for such infrastructure, Bluesky withdrew service from the state. This illustrates the tension between regulatory compliance, resource limits, and preserving open user access. AI adoption is now a competitive imperative. Coinbase pushes aggressive integration, requiring engineers to embrace tools like GitHub Copilot or face dismissal. With one-third of its code already AI-generated, Coinbase aims for 50% by quarter’s end, supported by “AI Speed Runs” for knowledge-sharing. Yet, rapid adoption risks employee dissatisfaction and AI-generated security flaws, underscoring the need for strict controls alongside innovation. Breaches at Farmers Insurance (1.1M customers exposed) and Google via Salesforce illustrate the scale of third-party risk. Attackers exploit trusted platforms and human error, compromising data across multiple organizations at once. This shows security depends not only on internal defenses but on continuous vendor vetting and monitoring. Governments often demand access that undermines encryption, privacy, and transparency. The FTC warns that backdoors or secret concessions—such as the UK’s (later retracted) request for Apple to weaken iCloud—violate user trust and U.S. law. Meanwhile, Russia’s mandatory domestic apps exemplify sovereignty used for surveillance. Companies face a global tug-of-war between privacy, compliance, and open internet principles. Exploited legacy flaws prove that vulnerabilities never expire. Cisco’s years-old Smart Install bug, still unpatched in many systems, allows surveillance of critical U.S. sectors. Persistent RDP scanning further highlights attackers’ patience and scale. The lesson is clear: proactive patching, continuous updates, and rigorous audits are essential. Cybersecurity demands ongoing vigilance against both emerging and legacy threats.

    19 min
  2. 21 AUG

    EP 256.5. Deep Dive. EP 256 The IT Privacy and Security Weekly Update for the Week ending August 19th., 2025 and Something Phishy

    Phishing Training Effectiveness: A study of over 19,000 employees showed traditional phishing training has limited impact, improving scam detection by just 1.7% over eight months. Despite varied training methods, over 50% of participants fell for at least one phishing email, highlighting persistent user susceptibility and the need for more effective cybersecurity education strategies. Cybersecurity Risks in Modern Cars: Modern connected vehicles are highly vulnerable to cyberattacks. A researcher exploited flaws in a major carmaker’s web portal, gaining “national admin” access to dealership data and demonstrating the ability to remotely unlock cars and track their locations using just a name or VIN. This underscores the urgent need for regular vehicle software updates and stronger manufacturer security measures to prevent data breaches and potential vehicle control by malicious actors. Nation-State Cyberattacks on Infrastructure: Nation-state cyberattacks targeting critical infrastructure are escalating. Russian hackers reportedly took control of a Norwegian hydropower dam, releasing water undetected for hours. While no physical damage occurred, such incidents reveal the potential for widespread disruption and chaos, signaling a more aggressive stance by state-sponsored cyber actors and the need for robust infrastructure defenses. AI Regulation in Mental Health Therapy: States like Illinois, Nevada, and Utah are regulating or banning AI in mental health therapy due to safety and privacy concerns. Unregulated AI chatbots risk harmful interactions with vulnerable users and unintended data exposure. New laws require licensed professional oversight and prohibit marketing AI chatbots as standalone therapy tools to protect users. Impact of Surveillance Laws on Privacy Tech: Proposed surveillance laws, like Switzerland’s data retention mandates, are pushing privacy-focused tech firms like Proton to relocate infrastructure. Proton is moving its AI chatbot, Lumo, to Germany and considering Norway for other services to uphold its no-logs policy. This reflects the tension between national security and privacy, driving companies to seek jurisdictions with stronger data protection laws. Data Brokers and Privacy Challenges: Data brokers undermine consumer privacy despite laws like California’s Consumer Privacy Act. Over 30 brokers were found hiding data deletion instructions from Google search results using specific code, creating barriers for consumers trying to opt out of data collection. This intentional obfuscation frustrates privacy rights and weakens legislative protections. Android pKVM Security Certification: Android’s protected Kernel-based Virtual Machine (pKVM) earned SESIP Level 5 certification, the first software security solution for consumer electronics to achieve this standard. Designed to resist sophisticated attackers, pKVM enables secure handling of sensitive tasks like on-device AI processing, setting a new benchmark for consistent, verifiable security across Android devices. VPN Open-Source Code Significance: VP.NET’s decision to open-source its Intel SGX enclave code on GitHub enhances transparency in privacy technology. By allowing public verification, users can confirm the code running on servers matches the open-source version, fostering trust and accountability. This move could set a new standard for the VPN and privacy tech industry, encouraging others to prioritize verifiable privacy claims.

    18 min
  3. 20 AUG

    The IT Privacy and Security Weekly Update for the Week ending August 19th., 2025 and ... Something Phishy

    EP 256. Freshly Phished this week...A study with thousands of test subjects showed phishing training has minimal impact on scam detection. The results are surprisingly underwhelming.A hacker exploited a carmaker’s web portal to access customer data and unlock vehicles remotely. The breach exposed major vulnerabilities.Russian hackers took control of a Norwegian dam, releasing water undetected for hours. The cyber-attack raises serious concerns and water levels.Illinois banned AI in mental health therapy, joining states regulating chatbots. The move addresses the growing safety concerns of AI and its crazy responses.Proton is relocating infrastructure from Switzerland due to proposed surveillance laws. The privacy-focused firm is taking bold steps and getting closer to the source of rakfisk.Data brokers are evading California’s privacy laws by concealing opt-out pages. This tactic blocks consumers from protecting their data.Android’s pKVM earned elite SESIP Level 5 security certification for virtual machines. The technology sets a new standard for device security, but what does it mean and what does it do?The UK abandoned its push to force Apple to unlock iCloud backups after privacy disputes. The decision followed intense negotiations with the U.S..VP.NET released its source code for public verification, enhancing trust in privacy tech. A move that sets a new transparency benchmark.​Let's hit the water! Find the full transcript to the podcast here.

    19 min
  4. 14 AUG

    EP 255.5 Deep Dive. Sweet Thing and The IT Privacy and Security Weekly Update for the Week ending August 12th., 2025

    How AI Can Inadvertently Expose Personal Data AI tools often unintentionally leak private information. For example, meeting transcription software can include offhand comments, personal jokes, or sensitive details in auto-generated summaries. ChatGPT conversations—when publicly shared—can also be indexed by search engines, revealing confidential topics such as NDAs or personal relationship issues. Even healthcare devices like MRIs and X-ray machines have exposed private data due to weak or absent security controls, risking identity theft and phishing attacks. Cybercriminals Exploiting AI for Attacks AI is a double-edged sword: while offering defensive capabilities, it's also being weaponized. The group “GreedyBear” used AI-generated code in a massive crypto theft operation. They deployed malicious browser extensions, fake websites, and executable files to impersonate trusted crypto platforms, harvesting users’ wallet credentials. Their tactic involves publishing benign software that gains trust, then covertly injecting malicious code later. Similarly, AI-generated TikTok ads lead to fake “shops” pushing malware like SparkKitty spyware, which targets cryptocurrency users. Security Concerns with Advanced AI Models like GPT-5 Despite advancements, new AI models such as GPT-5 remain vulnerable. Independent researchers, including NeuralTrust and SPLX, were able to bypass GPT-5's safeguards within 24 hours. Methods included multi-turn “context smuggling” and text obfuscation to elicit dangerous outputs like instructions for creating weapons. These vulnerabilities suggest that even the latest models lack sufficient security maturity, raising concerns about their readiness for enterprise use. AI Literacy and Education Initiatives There is a growing push for AI literacy, especially in schools. Microsoft has pledged $4 billion to fund AI education in K–12 schools, community colleges, and nonprofits. The traditional "Hour of Code" is being rebranded as "Hour of AI," reflecting a shift from learning to code to understanding AI itself. The aim is to empower students with foundational knowledge of how AI works, emphasizing creativity, ethics, security, and systems thinking over rote programming. Legal and Ethical Issues Around Posthumous Data Use One emerging ethical challenge is the use of deceased individuals' data to train AI models. Scholars advocate for postmortem digital rights, such as a 12-month grace period for families to delete a person’s data. Currently, U.S. laws offer little protection in this area, and acts like RUFADAA don’t address AI recreations. Encryption Weaknesses in Law Enforcement and Critical Systems Recent research highlights significant encryption vulnerabilities in communication systems used by police, military, and critical infrastructure. A Dutch study uncovered a deliberate backdoor in a radio encryption algorithm. Even the updated, supposedly secure version reduces key strength from 128 bits to 56 bits—dramatically weakening security. This suggests that critical communications could be intercepted, leaving sensitive systems exposed despite the illusion of protection. Public Trust in Government Digital Systems Trust in digital governance is under strain. The UK’s HM Courts & Tribunals Service reportedly concealed an IT error that caused key evidence to vanish in legal cases. The lack of transparency and inadequate investigation risk undermining judicial credibility. Separately, the UK government secretly authorized facial recognition use across immigration databases, far exceeding the scale of traditional criminal databases. AI for Cybersecurity Defense On the defensive side, AI is proving valuable in finding vulnerabilities. Google’s “Big Sleep,” an LLM-based tool developed by DeepMind and Project Zero, has independently discovered 20 bugs in major open-source projects like FFmpeg and ImageMagick.

    13 min
  5. 13 AUG

    Sweet Thing and The IT Privacy and Security Weekly Update for the Week ending August 12th., 2025

    EP 255   For this week's sweet update  we start with AI tools that are quietly transcribing your meetings, but what happens when your offhand jokes end up in the wrong hands? Discover how casual chats are being exposed in automated summaries.Your ChatGPT conversations might be popping up in Google searches, revealing everything from NDAs to personal struggles. Uncover the scale of this privacy breach and what it means for you.Fake TikTok shops are luring shoppers with AI-crafted ads, hiding a sinister malware trap. Dive into the world of counterfeit domains stealing crypto and credentials.MRI scans and X-rays are leaking online from over a million unsecured healthcare devices. Find out how your medical secrets could be exposed to hackers worldwide.Security teams cracked GPT-5’s defenses in hours, turning it into a tool for dangerous outputs. Explore how this AI’s vulnerabilities could spell trouble for enterprise users.A slick AI-driven crypto heist stole millions through fake browser extensions and scam sites. Learn how GreedyBear’s cunning tactics are redefining cybercrime.A secret IT glitch in UK courts has been wiping out evidence, leaving judges in the dark. Delve into the cover-up shaking trust in the justice system.UK police are scanning passport photos with facial recognition, all without public knowledge. Unravel the hidden expansion of surveillance using your personal images.Come on!  Let's raise those glucose levels. Find the full transcript to this podcast here.

    18 min
  6. 7 AUG

    EP 254.5 Deep Dive Tea for Six Point Two and the IT Privacy and Security Weekly Update for the Week Ending August 5th., 2025

    1. Scrutiny of the "Tea" Dating App The women-focused dating app "Tea" faces backlash after two data breaches exposed 72,000 sensitive images and 1.1 million private messages. Though security upgrades were promised, past data remained exposed, and the app lacks end-to-end encryption. Additionally, anonymous features enabling posts about men have sparked defamation lawsuits. Critics argue Tea prioritized rapid growth over user safety, exemplifying the danger of neglecting cybersecurity in pursuit of scale. 2. North Korean Remote Work Infiltration CrowdStrike has flagged a 220% surge in North Korean IT operatives posing as remote workers—over 320 cases in the past year. These operatives use stolen/fake identities, aided by generative AI to craft résumés, deepfake interviews, and juggle multiple jobs. Their earnings fund Pyongyang’s weapons programs. The tactic reveals the limits of traditional vetting and the need for advanced hiring security. 3. Airportr's Data Exposure UK luggage service Airportr suffered a major security lapse exposing passport photos, boarding passes, and flight details—including those of diplomats. CyberX9 found it possible to reset accounts with just an email and no limits on login attempts. Attackers could gain admin access, reroute luggage, or cancel flights. Although patched, the incident underscores risks of convenience services with poor security hygiene. 4. Risks of AI-Generated Code Veracode's "2025 GenAI Code Security Report" found that nearly 45% of AI-generated code across 80 tasks had security flaws—many severe. This highlights the need for human oversight and thorough reviews. While AI speeds development, it also increases vulnerability if unchecked, making secure coding a human responsibility. 5. Microsoft's SharePoint Hack Controversy Chinese state hackers exploited flaws in SharePoint, breaching hundreds of U.S. entities. A key concern: China-based Microsoft engineers maintained the hacked software, potentially enabling earlier access. Microsoft also shared vulnerability data with Chinese firms through its MAPP program, while Chinese law requires such data be reported to the state. This raises alarms about outsourcing sensitive software to geopolitical rivals. 6. Russian Embassy Surveillance Attack Russia’s "Secret Blizzard" hackers used ISP-level surveillance to deliver fake Kaspersky updates to embassies. These updates installed malware and rogue certificates enabling adversary-in-the-middle attacks—allowing full decryption of traffic. The attack shows the threat of state-level manipulation of software updates and underscores the need for update authenticity verification. 7. Signal’s Threat to Exit Australia Signal may pull out of Australia if forced to weaken encryption. ASIO’s push for access contradicts Signal's end-to-end encryption model, which can’t accommodate backdoors without global compromise. This standoff underscores a broader debate: encryption must be secure for all or none. Signal’s resistance reflects the rising tension between privacy advocates and governments demanding access. 8. Los Alamos Turns to AI Los Alamos National Laboratory has launched a National Security AI Office, signaling a pivot from nuclear to AI capabilities. With massive GPU infrastructure and university partnerships, the lab sees AI as the next frontier in scientific and national defense. This reflects a shift in global security dynamics—where large language models may be as strategically vital as missiles.

    18 min
  7. 6 AUG

    Tea for Six Point Two with the IT Privacy and Security Weekly Update for the Week Ending August 5th., 2025

    EP 254.  In this week's update:Despite back-to-back data breaches and legal blowback, women are still queuing up by the millions for Tea.  This is one hot dating app that's apparently more viral than secure.North Korean IT operatives are clocking into remote jobs worldwide. Fueled by GenAI and fake identities in what CrowdStrike calls a daily cybersecurity crisis.A British luggage startup managed to lose more than just bags. Airportr briefly exposed diplomatic travel data and full backend access to anyone with a browser and curiosity.According to Veracode, nearly half of all AI-generated code is insecure. And that should leave you feeling insecure, especially if your code reviews have been neglectedMicrosoft confirmed Chinese engineers have long supported the same SharePoint software recently hacked by Beijing.  The breach hit hundreds of U.S. institutions—including nuclear and homeland security.Russian state hackers tricked foreign embassies into installing fake updates from “Kaspersky.”  The malware came with a rogue root certificate—and full surveillance capabilities.Signal’s president warned it might pull out of Australia over demands to weaken encryption. The country’s privacy pushback continues—and secure apps are packing their bags.Los Alamos is pouring resources into AI research—because in 2025, the most powerful weapon might be a large language model, rather than a missile.Finish that cuppa, we have a lot to cover! Find the full transcript to this podcast here.

    18 min

About

Into year five for this award-winning, light-hearted, lightweight IT privacy and security podcast that spans the globe in terms of issues covered with topics that draw in everyone from executive, to newbie, to tech specialist. Your investment of between 15 and 20 minutes a week will bring you up to speed on half a dozen current IT privacy and security stories from around the world to help you improve the management of your own privacy and security.

You Might Also Like