The EU Court of Justice Clarifies the Limits of Pseudonymization: Data Remains Personal for Those Who Hold the Key
The EDPS/SRB Case: When Pseudonymized Opinions Remain Personal Data
The recent judgment of the Court of Justice of the European Union (First Chamber) of 4 September 2025 in Case C-413/23 P provides essential clarifications on the nature of pseudonymized data and on transparency obligations for data controllers, with significant implications for anyone managing data sharing processes with third parties.
The Factual Context
The case arose from the compensation procedure managed by the Single Resolution Board (SRB) for shareholders of Banco Popular Español following its resolution in 2017. The SRB had collected comments from interested parties through online forms, assigning each a unique alphanumeric code. Subsequently, it transmitted over 1,100 pseudonymized comments to Deloitte for evaluation, while retaining the “key” to link the codes to the authors’ identities.
In October and December 2019, affected shareholders and creditors who had responded to the form submitted five complaints to the EDPS under Regulation 2018/1725. They claimed not to have been informed that their data would be shared with third parties, in violation of Article 15 of Regulation 2018/1725.
Legal Principles Established by the Court
1. Personal Opinions Are Inherently Personal Data
The Court established a fundamental principle: personal opinions and viewpoints are inherently linked to the person expressing them and therefore constitute personal data, without requiring further analysis of their content, purpose, or effects. This is an expansive interpretation of the concept of personal data, and it strengthens the protection of data subjects.
2. The “Relative” Nature of Pseudonymized Data
The most innovative aspect of the ruling concerns the nature of pseudonymized data. The Court clarifies that:
- For the data controller who holds the re-identification key, pseudonymized data always remains personal data
- For third-party recipients who do not have access to the key, the same data may not constitute personal data, but only if technical and organizational measures effectively prevent any possibility of re-identification
This interpretation introduces a “relative” conception of personal data: the same information can simultaneously be considered personal data for one entity and not for another.
3. Transparency Obligations Prevail Over Pseudonymization
A crucial point: compliance with the obligation to inform data subjects about data recipients (Art. 15 of Regulation 2018/1725) must be assessed:
- At the time of data collection
- From the perspective of the data controller
It is irrelevant that the data, after pseudonymization, may lose its personal character for the recipient. The controller must always inform data subjects in advance about possible sharing, even if it intends to pseudonymize the data before transfer.
Pseudonymization vs. Anonymization: Fundamental Differences
The ruling provides an opportunity to reflect on the fundamental distinction between these two concepts, often confused in practice:
Pseudonymization
- Definition: processing that prevents the attribution of data to a specific data subject without the use of additional information kept separately
- Reversibility: the process is reversible for those who hold the key
- Legal status: data remains personal at least for the data controller
- GDPR application: applies in full
- Example: replacing names with alphanumeric codes while maintaining a correspondence table
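The code-substitution pattern described above can be sketched in a few lines. This is an illustrative sketch only (the names and record layout are assumptions, not details from the case): each author's name is replaced by a random code, and the correspondence table is returned separately so it can be stored apart from the shared data.

```python
import secrets

def pseudonymize(records):
    """Replace names with random codes; return the pseudonymized
    records plus the correspondence table (the re-identification key)."""
    key_table = {}  # code -> original name; to be stored separately
    shared = []
    for rec in records:
        code = "P-" + secrets.token_hex(4)  # e.g. "P-a3f91c0b"
        key_table[code] = rec["name"]
        shared.append({"id": code, "comment": rec["comment"]})
    return shared, key_table

records = [{"name": "Ana Ruiz", "comment": "Disagree with the valuation"}]
shared, key_table = pseudonymize(records)
# 'shared' can go to a third party; whoever holds 'key_table' can re-identify.
```

The ruling's "relative" conception maps directly onto the two return values: for anyone holding `key_table`, the entries in `shared` remain personal data.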
Anonymization
- Definition: process that makes it impossible to identify the data subject by any reasonably available means
- Irreversibility: the process must be irreversible for everyone
- Legal status: data ceases to be personal
- GDPR application: no longer applies
- Example: statistical aggregation that prevents any individual re-identification
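By contrast, statistical aggregation discards the individual-level records entirely. A minimal sketch, assuming a simple k-threshold rule (suppressing any group smaller than k so no individual can be singled out); real anonymization requires a much fuller risk assessment:

```python
from collections import Counter

def aggregate(records, field, k=5):
    """Aggregate counts by a category, suppressing groups smaller
    than k so that small groups cannot be singled out."""
    counts = Counter(rec[field] for rec in records)
    return {category: n for category, n in counts.items() if n >= k}

records = [{"region": "ES"}] * 7 + [{"region": "PT"}] * 2
totals = aggregate(records, "region")
# 'PT' (only 2 records) is suppressed; no key exists to reverse this.
```

Unlike the pseudonymization sketch, there is no correspondence table here: the individual rows are gone, which is what makes the result irreversible for everyone.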
Practical Implications for Organizations
This ruling has significant consequences for those handling personal data:
Due diligence in data sharing: before sharing pseudonymized data, verify whether the recipient could reasonably re-identify data subjects through other means
Enhanced transparency: privacy notices must always indicate all potential data recipients, regardless of the protection techniques intended to be applied
Documentation of technical measures: it is essential to document and demonstrate the effectiveness of implemented pseudonymization measures
Case-by-case assessment: there are no universal solutions; each transfer of pseudonymized data requires a specific evaluation of context and risks
Artificial Intelligence Meets Pseudonymization: What Changes with the AI Act
The Court’s ruling takes on an even more significant dimension when read in light of the AI Act (Regulation EU 2024/1689), which came into force in August 2024. The new European regulation on artificial intelligence sets particularly stringent requirements for systems processing special categories of data, and the Court’s interpretation of the “relative” nature of pseudonymized data directly impacts many AI applications considered high-risk.
The Biometric Data Challenge in the AI Era
Consider the case of biometric identification systems, which are increasingly widespread in corporate and public settings. When these systems pseudonymize biometric templates - those mathematical fingerprints of our faces or voices - the ruling reminds us of an uncomfortable truth: for the system provider maintaining the decoding key, that data always remains sensitive personal data. It doesn’t matter how sophisticated the privacy-preserving techniques used are, whether biometric hashing or homomorphic encryption: the transparency obligation toward data subjects remains fully intact.
But there’s more. When these templates are shared with third parties, such as cloud providers hosting the systems or technology partners supplying specialized components, the situation becomes even more complex. Only if the implemented technical measures truly prevent any possibility of re-identification can this data lose its personal character for those third parties. An outcome that, as the ruling demonstrates, is far from guaranteed.
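The asymmetry at issue can be illustrated with keyed hashing, one of the template-protection techniques mentioned above. This is a deliberately simplified sketch (real biometric matching is fuzzy, and `SECRET_KEY` is a hypothetical placeholder): a recipient without the key cannot link the hash to a person, while the key holder can still compute and compare hashes, which is why the data remains personal for them.

```python
import hashlib
import hmac

SECRET_KEY = b"held-only-by-the-provider"  # hypothetical; the provider's "key"

def protect_template(template_bytes):
    """Keyed (HMAC-SHA256) hash of a biometric template.
    Without SECRET_KEY, the output cannot be linked back to a person;
    the key holder can still recompute and match it."""
    return hmac.new(SECRET_KEY, template_bytes, hashlib.sha256).hexdigest()

digest = protect_template(b"face-template-of-subject-42")
# Same input + same key -> same digest: the key holder can always re-link.
```

The point of the sketch is the ruling's, not cryptography's: however strong the primitive, the party holding `SECRET_KEY` retains the ability to link, so its transparency obligations are untouched.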
The Minefield of Emotion Recognition
Even more delicate is the issue of emotion recognition systems, which the AI Act expressly prohibits in the workplace and educational contexts, with limited exceptions. Here, the ruling takes on decisive weight: even when facial expressions or voice parameters are pseudonymized in training datasets, their nature as personal data doesn’t vanish. Inferences about a person’s emotional state - such as the micro-expression revealing stress or the vocal modulation betraying anxiety - remain inextricably linked to the individual who generated them.
That means companies developing emotion AI systems cannot hide behind pseudonymization to circumvent the AI Act’s prohibitions. In workplaces, pseudonymizing employees’ emotional data doesn’t legitimize the use of otherwise prohibited systems. For applications in medical or security contexts, where some exceptions are provided, it becomes essential to rigorously document not only the pseudonymization measures adopted but also the actual impossibility of re-identification by all actors involved in the value chain.
The Critical Intersection Between GDPR and AI Act
The ruling also highlights the increasingly critical intersection between the GDPR and the AI Act in AI data governance, an aspect that the new EDPB Guidelines 01/2025 specifically address. The EDPB clarifies that “the risk reduction resulting from pseudonymisation may enable controllers to rely on legitimate interests under Art. 6(1)(f) GDPR” - a principle particularly relevant for AI systems processing large volumes of data.
Pseudonymization of training datasets, a common practice in machine learning, is not exempt from the data quality requirements set out in Article 10 of the AI Act. As highlighted by the EDPB Guidelines, controllers must “establish and precisely define the risks they intend to address with pseudonymisation” (Executive Summary). In the AI context, this means documenting in detail not only who holds the re-identification keys, but also how the implemented technical measures effectively prevent re-identification by third parties, with case-by-case assessments of pseudonymization effectiveness for each specific use context.
The Guidelines also provide concrete examples (Examples 7 and 8 in the Annex) of how pseudonymization can facilitate the balancing of interests in secondary data analysis, a common scenario in AI model training. However, the EDPB warns that “pseudonymisation alone will normally not be a sufficient measure” and must be combined with other technical and organizational safeguards.
The practical implications run deep. In federated learning, where AI models are trained on distributed and pseudonymized data, the information obligation remains firmly with the central coordinator. Generating synthetic data from pseudonymized datasets - another increasingly popular technique - doesn’t automatically eliminate the personal character of data if the process remains somehow reversible. And for every actor receiving pseudonymized data along the AI value chain, precise mapping and assessment of re-identification risk becomes necessary, as detailed in paragraph 135 of the EDPB Guidelines.
Strengthened Documentation Obligations
Particularly relevant are the documentation obligations emerging from the intersection of the ruling and the AI Act. The technical documentation file required for high-risk systems must now specify precisely:
- Who holds the re-identification keys and with what guarantees
- The specific technical measures preventing re-identification by each third party
- A detailed assessment, case by case, of the actual effectiveness of pseudonymization
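One practical way to keep such records auditable is to hold them in a structured, machine-readable form. A sketch only: the field names below are assumptions for illustration, not terminology drawn from the AI Act or the EDPB Guidelines.

```python
from dataclasses import asdict, dataclass

@dataclass
class PseudonymizationRecord:
    """One documented transfer of pseudonymized data to one recipient."""
    dataset: str                   # which dataset was shared
    key_holder: str                # who holds the re-identification key
    key_safeguards: list           # guarantees around the key
    recipient: str                 # the third party receiving the data
    technical_measures: list       # measures preventing re-identification
    residual_risk: str             # case-by-case effectiveness assessment

record = PseudonymizationRecord(
    dataset="stakeholder-comments-2019",
    key_holder="controller (internal DPO custody)",
    key_safeguards=["separate storage", "access logging"],
    recipient="external evaluator",
    technical_measures=["code substitution", "no quasi-identifiers shared"],
    residual_risk="low; reviewed per transfer",
)
entry = asdict(record)  # serializable for the technical documentation file
```

Because the ruling makes the assessment recipient-specific, one such record per recipient (rather than one per dataset) is the natural granularity.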
Data Protection Impact Assessments (DPIAs) must be rethought to explicitly consider the “relative” nature of pseudonymized data: what is anonymous for one entity may be personal for another, and this distinction must be actively documented and managed.
Toward a New Data Protection Paradigm in the AI Era
The Court’s ruling confronts us with a reality that the technology sector must accept: pseudonymization, however sophisticated, is not a magic wand that transforms personal data into anonymous data. It is rather one tool in a broader arsenal of protection measures that must be carefully orchestrated, meticulously documented, and, above all, transparently communicated to data subjects.
In an age where artificial intelligence increasingly permeates every aspect of our daily lives - from the facial recognition that unlocks our phones to the predictive analytics that influence crucial decisions about credit, employment, and health - this ruling reminds us that one cannot sacrifice personal data protection on the altar of technological innovation. The path forward requires a delicate but necessary balance between AI’s transformative potential and individuals’ fundamental rights.
Conclusions
The EDPS/SRB ruling not only reinforces GDPR principles but also anticipates crucial challenges for the implementation of the AI Act. In the artificial intelligence context, where pseudonymization is often seen as a solution to balance innovation and privacy, the Court reminds us that:
- Pseudonymization is not a “silver bullet” for compliance, especially for high-risk AI systems
- Biometric data and emotional inferences maintain their sensitivity even when pseudonymized
- Transparency toward data subjects is an obligation that prevails over technical protection measures
For organizations developing or using AI systems, this means adopting a privacy-by-design approach that goes beyond mere pseudonymization, integrating from the design phase measures that ensure proper anonymization where possible, or full compliance with the GDPR-AI Act framework where data remains personal.
Source: CJEU Ruling C-413/23 P, EDPS/SRB, 4 September 2025
Related hashtags
#GDPR #AIAct #DataProtection #Pseudonymization #CJEU #ArtificialIntelligence #BiometricData #EmotionRecognition #EDPS #DataGovernance #PrivacyByDesign #AI #LegalTech #DataSecurity #DigitalRights #TechLaw #Compliance #CaseAnalysis #CourtRuling #DataStrategy