Privacy Navigator: Weekly Insights on Privacy, AI, and Compliance

Elislav Atanasov

Stay ahead in the fast-paced world of privacy and artificial intelligence with The Privacy Navigator Podcast. Each episode delivers the latest news, regulations, case law, and guidelines, ensuring you're always informed about the evolving privacy landscape. Designed for busy privacy professionals, our deep dives cut through the noise to bring you the essential trends and issues shaping the industry today. Whether you're managing ROPAs, DSARs, vendors, or policies, we've got you covered with expert insights and practical advice. Brought to you by Conformally (https://conformally.com/).

Episodes

  1. 29 MAY

    2025-W22 Replika Hit with EUR 5M Fine, Meta Wins Big, EU Commission Indecisive

    Garante Slams Replika with a EUR 5M Fine
    The Italian Data Protection Authority (Garante) has imposed significant corrective measures, including a EUR 5M fine and a potential ban on processing Italian users' data, against Luka Inc., the company behind the AI chatbot Replika. According to the decision, the Garante found multiple GDPR breaches:
    - Lack of legal basis: particularly for processing sensitive data inferred from user conversations, including emotional and health-related information (Articles 6 and 9).
    - Transparency failures: insufficient information provided to users about how their data, especially chat content, would be used for training AI models (Article 13).
    - Risks to minors: inadequate age verification systems, leading to the unlawful processing of children's data (Article 8).
    - No DPIA: failure to conduct a Data Protection Impact Assessment for what is clearly high-risk processing (Article 35).
    - Data protection by design/default deficiencies: the principles of Article 25 were not adequately implemented.
    The "black box" nature of some AI models won't fly if the fundamentals of GDPR – legal basis, transparency, risk assessment, and data protection by design – are not robustly addressed from the outset. For AI companions and similar services, inferred data is increasingly seen as sensitive, requiring explicit consent.

    Meta Pushes Ahead with EU User Data for AI Training
    This is the first time we report privacy news in favour of Meta. It's odd. It seems that legitimate interest could be the way to go for AI training after all. First, the Cologne Higher Regional Court in Germany made a significant ruling concerning Meta's use of publicly available user data for training its artificial intelligence systems. The court found that Meta's actions were lawful under Article 6(1)(f) of the General Data Protection Regulation (GDPR) and recognized Meta's interest in training its AI as a legitimate aim. A key point in the ruling was the acknowledgement that training effective AI models requires vast quantities of data. Additionally, Meta has signaled its intention to train its AI with user data to the Irish DPC, the lead supervisory authority. Again, Meta is expected to rely on "legitimate interests" (Article 6(1)(f) GDPR) as the legal basis for this processing. The Irish DPC issued a statement confirming it is engaging with Meta on these plans.
    Using opt-out for AI training data raises many questions. Once data is ingested and used to train a foundational model, can it truly be "unlearned" or its influence fully erased if a user objects later?
    How to opt out? If you haven't already, here is how to opt out from Meta using your personal data for AI training. Here's the direct link to submit your request to Meta. If for some reason the link doesn't work, go to Privacy > Privacy Center > Privacy Topics > Submit an objection request. You will have to do the same for each social media platform you use... Yes, it's infuriating. It's called malicious compliance.

    EU Commission Suggests EU AI Act Pause and GDPR Simplification
    While the EU AI Act is formally adopted and its phased entry into force continues, the path to full practical implementation is hitting some turbulence. Recent reports indicate that the development of harmonized technical standards, which are vital for companies to demonstrate compliance for high-risk AI systems, is taking longer than initially anticipated, with some now expected in 2026. Similarly, the Code of Practice for General-Purpose AI (GPAI) models has faced pushback and delays in finalization. Separately, but related to the AI ecosystem, on May 21, 2025, the European Commission announced a series of simplification measures aimed at reducing administrative burdens and cutting red tape for EU businesses, particularly Small and Medium-sized Enterprises (SMEs).

    24 min
  2. 22 MAY

    2025-W21 noyb vs Meta, Google with $1.375 Billion Settlement and Deepfakes Law

    AI Training & Privacy: noyb vs Meta
    Privacy advocacy group noyb (none of your business) has issued a "cease and desist" letter to Meta's Irish headquarters, threatening a class action lawsuit if the tech giant proceeds with its plan to train its AI models using EU user data without explicit opt-in consent. Meta's intention, set for May 27, 2025, is to use public data shared by adults across Facebook and Instagram for AI training, relying on an alleged "legitimate interest" under GDPR. Noyb argues that this "opt-out" approach is a clear violation of GDPR, which generally requires explicit consent for such extensive data processing, especially for AI training. They highlight that even if only a small percentage of users opted in, it would still provide Meta with vast amounts of data to learn EU languages and cultural references. Max Schrems, noyb's founder, stated that Meta's claim of "legitimate interest" is "neither legal nor necessary" and "laughable." This isn't the first time Meta has faced scrutiny over its reliance on "legitimate interest": it was previously forced to shift to a consent-based approach for targeted advertising in the EU in 2023. Noyb also raises concerns about Meta's ability to technically differentiate between users who opt out and those who don't, and the lack of clarity or approval from national data protection authorities.

    Texas vs. Google: A $1.375 Billion Privacy Win
    Texas Attorney General Ken Paxton announced a landmark $1.375 billion settlement with Google, resolving lawsuits alleging that Google illegally tracked and collected Texans' personal data without their consent. This record-breaking settlement is the largest ever secured by a state attorney general against Google for data privacy violations. The lawsuit, filed in 2022, accused Google of secretly tracking users' movements, private searches, and even voiceprints and facial geometry through its products and services. Paxton emphasized that "Big Tech is not above the law" and that the settlement sends a clear message that companies will be held accountable for abusing public trust. Google stated that the agreement settles various "old claims" related to product policies it has already changed and does not require any additional product changes.
    While a $1.375 billion settlement sounds substantial, it's crucial to look beyond the headline. Google, as is common in such settlements, admitted no wrongdoing. This allows it to avoid setting a legal precedent that could have wider implications. The fact that Google claims it doesn't need to make "any additional product changes" is also telling. It suggests that the financial penalty, while large, may be more a cost of doing business than a catalyst for fundamental shifts in data collection practices.

    The "Take It Down Act" Signed into Law: A New Era for Deepfake Regulation
    President Trump recently signed the "Take It Down Act" (officially, the "Tools to Address Known Exploitation by Immobilizing Technological Deepfakes On Websites and Networks Act") into law. Championed by Melania Trump, this bipartisan bill addresses the non-consensual online publication of intimate visual depictions, explicitly covering AI deepfakes. Key provisions include:
    - Prohibition & penalties: criminalizes the non-consensual online publication of intimate visual depictions (both authentic and computer-generated, termed "digital forgeries") with mandatory restitution and criminal penalties (prison, a fine, or both). Threats to publish such depictions are also prohibited.
    - Platform responsibilities: requires "covered platforms" (public websites, online services, or applications primarily providing a forum for user-generated content) to establish a process for individuals to report and request removal of such content. Platforms must remove the content within 48 hours of notification.
    Learn more at https://conformally.com/privacy-navigator

    11 min
  3. 22 MAY

    2025-W04 Navigating the Pseudonymisation Guidelines

    Pseudonymisation is a multifunctional tool helping us comply with many GDPR provisions and principles. Pseudonymisation should always be considered in the context of ROPAs. It's not something that is simply stated in some document: "we are protecting data by using pseudonymisation" is definitely not enough. Lastly, the framework below is the result of me summarising the examples from the Annex of the Guidelines. It's not something I came up with on my own.

    Context and Purpose of Processing
    Start by understanding the purpose of processing and documenting it in your Record of Processing Activities (ROPA).
    - Example context: a hospital conducts a clinical study involving patient health records to analyze treatment efficacy.
    - Purpose of processing: the hospital needs to process sensitive health data while ensuring compliance with GDPR and minimizing privacy risks to participants.

    What Problem Is to Be Solved?
    Define your goal for pseudonymisation. Are you aiming to rely on legitimate interests for processing, meet the privacy by design/default principle, or both?
    - Objective: protect patient privacy by pseudonymising health records to reduce re-identification risks while enabling researchers to use the data for analysis.
    - Compliance goal: fulfill the privacy by default principle while maintaining the utility of the data.

    Original Data
    Describe the personal data you are starting with before applying pseudonymisation. Example original data: patient names, addresses, dates of birth, health conditions, treatment history.

    Pseudonymised Domain
    Define who will process the pseudonymised data and in what capacity. Example: researchers analyzing the dataset. They will work with pseudonymised data and will not have access to the additional information required for re-identification.

    Pseudonymised Data
    Describe the data after pseudonymisation. Example pseudonymised data: names are replaced with random identifiers (e.g., "Patient_001"); addresses and dates of birth are generalized (e.g., replacing "01/21/1980" with "1980"); health conditions and treatment history remain unchanged for research purposes but are no longer directly linked to individuals.

    Additional Information
    Explain how you will implement pseudonymisation, detailing the method used.
    - Method: use a lookup table to replace names with pseudonyms. Example: store the mapping "Patient_001" = "John Doe" in a secure, access-controlled database. Optionally, encrypt sensitive fields (e.g., addresses) using AES encryption.
    - Storage: keep the lookup table and encryption keys in a physically and logically separate system, accessible only to authorized personnel.

    Processing of Pseudonymised Data
    Describe how the pseudonymised data will be used. Example use case: researchers access pseudonymised health data for statistical analysis. The pseudonyms (e.g., "Patient_001") are sufficient for their work and do not allow them to identify specific individuals.

    Pseudonymisation Process
    Detail the steps taken to pseudonymise the data; a minimal code sketch illustrating them follows after the safeguards below.
    - Step 1: extract relevant data fields from the original dataset.
    - Step 2: replace direct identifiers (e.g., names) with pseudonyms using a secure, randomized algorithm.
    - Step 3: encrypt sensitive indirect identifiers (e.g., addresses) using cryptographic methods.
    - Step 4: store the mapping of original identifiers to pseudonyms (lookup table) and encryption keys in separate secure locations.
    - Step 5: provide the pseudonymised dataset to researchers for processing.

    Additional Safeguards
    Identify safeguards specific to this scenario to further protect the pseudonymised data.
    - Access controls: strictly limit access to the lookup table and encryption keys.
    - Separation of duties: ensure that only administrative staff can access the lookup table, while researchers handle only pseudonymised data.
    - Auditing: regularly monitor and log access to both the lookup table and the pseudonymised dataset.
    - Minimization: only share the minimum data necessary for the research objective.
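    Here is a minimal Python sketch of the lookup-table approach described above, assuming a hypothetical patient record structure; the field names and values are invented for illustration and are not taken from the Guidelines themselves:

        # Minimal pseudonymisation sketch: random pseudonyms via a lookup table,
        # generalisation of the date of birth, and separate storage of the mapping.
        import secrets

        def pseudonymise(records):
            lookup = {}   # pseudonym -> original name; keep in a separate, access-controlled store
            output = []
            for rec in records:
                pseudonym = f"Patient_{secrets.token_hex(4)}"   # random, non-meaningful identifier
                lookup[pseudonym] = rec["name"]
                output.append({
                    "id": pseudonym,
                    "birth_year": rec["date_of_birth"][-4:],    # "01/21/1980" -> "1980"
                    "condition": rec["condition"],              # kept unchanged for research utility
                    "treatment": rec["treatment"],
                })
            return output, lookup

        patients = [{"name": "John Doe", "date_of_birth": "01/21/1980",
                     "condition": "hypertension", "treatment": "ACE inhibitors"}]
        data, lookup_table = pseudonymise(patients)
        # `data` is what researchers receive; `lookup_table` (and any encryption keys)
        # stays in a physically and logically separate system, per the framework above.

    Encryption of indirect identifiers (Step 3) is deliberately omitted from the sketch; in practice you would use a vetted library implementation of AES rather than anything hand-rolled.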

    14 min
  4. 22 MAY

    2025-W03 The Austrian DSB Slaps Down Google’s Controllership Denial

    The Austrian DSB Slaps Down Google's Controllership Denial
    A data subject submitted a Data Subject Access Request (DSAR) directly to Google LLC, demanding access to their personal data under GDPR. Google LLC dodged responsibility, passing the request off to Google Ireland Ltd. and claiming the latter was the sole controller for EEA and Swiss operations. This triggered an investigation by the Austrian DSB, who didn't buy Google LLC's claim that they were just a bystander. The evidence uncovered showed Google LLC wasn't just "helping out" — they were the mastermind behind key data processing decisions.

    Why Google LLC Can't Escape Being a Controller
    Let's be clear — the DSB saw right through Google LLC's attempt to paint themselves as a processor. Google LLC sets the tone for product development, infrastructure, and the rules of the game for how personal data is handled globally. That's textbook controllership.

    DSARs Are a Controller's Problem, Period
    Here's the deal: GDPR Article 4(7) says controllers are responsible for everything — from why data is collected to what's done with it. And under Articles 12–23, responding to DSARs is non-negotiable. By directing data processing globally, Google LLC effectively made themselves accountable for these requests.
    What nailed Google LLC?
    - They control the playbook for EEA processing.
    - They design the systems that collect and process personal data.
    - Their contracts with Google Ireland Ltd. didn't effectively hand off responsibilities.
    In short, the DSB ruled: "You can't be this involved and not call yourself a controller."

    Signs You're a Controller (Even If You Deny It)
    - You decide what data gets collected and why.
    - You build the systems and infrastructure for processing.
    - You set the rules — from storage to security to compliance.
    - You enforce standards across global operations.
    - You call the shots when it comes to how personal data is used, shared, or accessed.

    11 min
  5. 22 MAY

    2024-W50 UK Data Use and Access Bill Updates

    UK Data Use and Access Bill Updates
    The UK government is proposing the Data Use and Access Bill to modernise data protection regulations. The bill seeks to balance the benefits of data processing with user privacy and has received positive feedback from the Information Commissioner's Office (ICO). It impacts sectors like health and finance, promoting data sharing in research while clarifying consent procedures. The bill addresses the use of automated decision-making technology (ADMT) in AI, granting individuals the right to challenge decisions made by AI systems. The proposed reforms would restructure the ICO, granting it additional enforcement resources and responsibilities related to technology innovation and public safety. The ICO would gain powers to investigate data protection compliance and security incidents, potentially requiring organisations to provide technical reports. The bill emphasises the responsible and careful handling of personal data in the context of AI and data breaches. Read more here

    Seven Consortia Selected to Establish AI Factories Across Europe
    The tldr: these factories aim to boost AI innovation and will receive €1.5 billion in funding, split equally between the EU and national sources. They will be hosted in research hubs across Europe, including Barcelona, Bologna, Kajaani, Bissen, Linköping, Stuttgart and Athens. The AI factories will provide access to the computing power, data, and talent necessary for AI development. Their focus is on developing large language models and specialised vertical AI models for various sectors. The next opportunity for Member States to submit proposals for new AI factories is 1 February 2025. Read more here

    Brazilian AI Act on the Way
    Brazil is on its way to regulating AI. Here is what you need to know:
    - The Brazilian Senate will vote in two days on a bill to regulate Artificial Intelligence (AI).
    - The bill defines AI systems similarly to the EU AI Act.
    - It outlines rights for people affected by AI, drawing inspiration from GDPR principles.
    - A risk-based approach is adopted, prohibiting certain AI systems deemed to be excessively risky.
    We quite liked Luiza Jarovski's take on this, see it here

    15 min
