NicFab Newsletter
Issue 10 | March 3, 2026
Privacy, Data Protection, AI, and Cybersecurity
Welcome to issue 10 of the weekly newsletter dedicated to privacy, data protection, artificial intelligence, cybersecurity, and ethics. Every Tuesday, you will find a carefully selected list of the most relevant news from the previous week, with a focus on European regulatory developments, case law, enforcement, and technological innovation.
In this issue
- ITALIAN DATA PROTECTION AUTHORITY
- EDPB - EUROPEAN DATA PROTECTION BOARD
- EDPS - EUROPEAN DATA PROTECTION SUPERVISOR
- EUROPEAN COMMISSION
- CNIL - FRENCH AUTHORITY
- EUROPEAN PARLIAMENT
- INTERNATIONAL DEVELOPMENTS
- ARTIFICIAL INTELLIGENCE
- CYBERSECURITY
- TECH & INNOVATION
- SCIENTIFIC RESEARCH
- AI Act in a Nutshell
- Events and meetings
- Conclusion
ITALIAN DATA PROTECTION AUTHORITY
Stop the filing of Amazon workers: the Authority prohibits unlawful processing
The Privacy Authority has ordered Amazon Italia Logistica to immediately cease processing the personal data of over 1,800 workers at the Passo Corese center. The investigation revealed the systematic collection of sensitive information via a platform linked to the attendance system, accessible to numerous managers and stored for up to 10 years after the termination of employment.
The unlawfully collected data included specific medical conditions (Crohn’s disease, herniated disc), information on union activities and strikes, as well as details about employees’ private and family lives. The measure highlights a violation of the principle of relevance: employers may process data only for assessing professional aptitude.
For DPOs, this case underscores the importance of regularly auditing HR systems and training managers who access personal data, ensuring that all processing complies with the principles of lawfulness and minimization.
EDPB - EUROPEAN DATA PROTECTION BOARD
AI-generated imagery and protection of privacy: EDPB supports the Global Privacy Assembly’s joint statement
The EDPB has endorsed a joint statement by the Global Privacy Assembly, comprising 61 global authorities, on the risks posed by generative AI for images and videos. The document addresses growing concerns about the non-consensual creation of content depicting real people, including intimate and defamatory images, with a particular focus on the risks to minors and vulnerable groups.
The statement sets out key principles for organizations developing or using generative AI systems: robust safeguards, meaningful transparency, effective mechanisms to protect individuals, and specific measures to protect minors. The authorities commit to sharing regulatory approaches and urge companies to engage with regulators proactively.
For DPOs, this represents an important signal about the future regulatory direction of generative AI, requiring particular attention to impact assessments and protective measures in projects involving content generation.
EDPS - EUROPEAN DATA PROTECTION SUPERVISOR
Joint Statement on AI-Generated Imagery and the Protection of Privacy
The EDPS joins 60 other data protection authorities from the Global Privacy Assembly in a joint statement on the risks of AI-generated imagery. The initiative highlights growing concerns about the creation of synthetic content that can compromise people’s privacy and dignity.
The statement emphasizes how generative AI technologies can be used to create realistic images without the consent of the subjects depicted, raising critical questions about the protection of personal data. For DPOs, this development represents a new frontier of compliance to be monitored closely.
The coordinated approach of international authorities underscores the urgency of developing specific regulatory frameworks for generative AI, requiring organizations to proactively assess the risks associated with these technologies in their processes.
EUROPEAN COMMISSION
EU privacy reforms meet with strong resistance
The ambitious GDPR reforms proposed by the European Commission in the “digital omnibus” package are facing significant opposition from national governments. A negotiating document obtained by Politico reveals that member states mainly contest the revision of the definition of “personal data,” one of the most contentious elements of the package.
The proposal aims to align the GDPR with the recent ruling of the EU Court of Justice (SRB v EDPS), allowing more flexible use of pseudonymized data for AI development. This change would move huge amounts of data outside the scope of privacy protections, particularly benefiting tech companies and AI developers.
For DPOs, this regulatory battle represents a crucial moment: on the one hand, there is pressure to facilitate AI innovation, and on the other, the need to maintain strong data protection safeguards. European supervisory authorities have already expressed concerns, while the tech industry welcomes the reform proposals.
CNIL - FRENCH AUTHORITY
PANAME project: GDPR audit tests for AI models
The CNIL is launching the PANAME project, inviting professionals to test an innovative GDPR audit tool specifically designed for artificial intelligence models. This initiative represents a significant step in the evolution of compliance tools for emerging technologies.
For DPOs working in organizations that use AI systems, this opportunity offers early access to specialized audit methodologies. Participation in the tests will allow them to gain practical skills and contribute to the development of industry standards.
The initiative demonstrates the CNIL’s proactive approach to providing concrete tools for compliance management in the AI era, anticipating future regulatory challenges.
Public consultation on session replay tools
The CNIL is opening a public consultation on its draft recommendation on session replay tools. These technologies reconstruct users’ browsing sessions in full by recording every interaction: mouse movements, clicks, scrolls, and sometimes form entries.
These tools, used for UX optimization and debugging, have significant privacy implications due to their granular tracking capabilities. The consultation aims to guide both developers and users toward GDPR compliance.
For DPOs, this initiative provides an opportunity to influence future guidelines and prepare internal policies for the responsible use of these technologies, which are increasingly prevalent in digital marketing and behavioral analysis.
GDPR Day on Health Data and Research in Paris
The CNIL is organizing a training day dedicated to the application of the GDPR in the healthcare and research sectors, which are characterized by specific regulatory and operational complexities. The event represents an opportunity to learn about critical issues for many organizations.
The healthcare sector requires particular attention in the management of legal bases, consents, and processing purposes, especially in scientific research contexts where public interests and individual rights intersect.
For DPOs working in healthcare, universities, and research centers, participation offers practical insights into complex use cases and networking with colleagues who face similar challenges in their daily compliance management.
EUROPEAN PARLIAMENT
Digital Omnibus: Analysis of Interconnections in EU Digital Legislation
The European Parliament’s Think Tank has published a detailed study on the Digital Omnibus package proposed by the European Commission in November 2025. The analysis, commissioned by the IMCO Committee, focuses on identifying interconnections and overlaps among different pieces of digital legislation.
The study distinguishes between administrative simplification and more substantial recalibration of safeguards in critical areas such as data, privacy, cybersecurity, and artificial intelligence. For DPOs, particular attention should be paid to the controversies highlighted regarding legal certainty, enforcement capabilities, and the impact on fundamental rights.
The document also provides a roadmap for parliamentary oversight, which is crucial for understanding the future evolution of the European digital regulatory framework and its operational implications for data protection.
Executive Summary of the Digital Omnibus: Guide for Parliamentary Oversight
The “At a Glance” document offers an accessible summary of the full Digital Omnibus study, providing an immediate overview of the issues most relevant to European legislators. The analysis highlights that the package proposed by the Commission aims to streamline technology rules by systematically identifying regulatory overlaps.
For privacy professionals, the study is a valuable resource for anticipating changes in the regulatory landscape. The identified areas of controversy—legal certainty, enforcement capacity, and impacts on rights—are particularly significant for those working in data protection.
The publication of both documents underscores the strategic importance the European Parliament attaches to the rationalization of digital legislation. This process could significantly influence DPO operations in the coming years.
INTERNATIONAL DEVELOPMENTS
Global coalition of regulators against uncontrolled generative AI
A coalition of more than 60 data protection authorities, including the UK’s ICO and Ireland’s DPC, has issued a clear warning to the generative AI industry: the ability to create realistic images of people does not exempt companies from complying with privacy regulations. The joint statement highlights how AI models integrated into social media are facilitating the creation of non-consensual intimate content and defamatory material.
The initiative comes after formal investigations into Musk’s xAI for generating sexual images without consent via Grok. Regulators emphasize the need to implement safeguards by design, with a particular focus on risks to minors and vulnerable groups. For DPOs, this sends a clear signal: generative AI must be evaluated with the same rigorous standards applied to any other processing of personal data, requiring thorough DPIAs and adequate safeguards.
ARTIFICIAL INTELLIGENCE
Red Lines of the EU AI Act: Manipulative Techniques and Exploitation of Vulnerabilities
The EU AI Act establishes clear prohibitions for AI systems that use subliminal manipulative techniques or exploit vulnerabilities related to age, disability, or socioeconomic situations. The goal is to preserve individual decision-making autonomy and to promote human-centered, trustworthy AI.
DPOs need to understand the interaction between these prohibitions and existing regulations, such as the GDPR and the Digital Services Act. The GDPR already regulates manipulative practices through principles of fairness and privacy by design, while the DSA prohibits deceptive interfaces on online platforms. The European Commission has published guidelines clarifying how these prohibitions protect individuals from becoming “mere tools” and shield the most vulnerable from manipulation.
Anthropic-Pentagon clash: Supply Chain Risk over Military AI Controversy
The Pentagon has designated Anthropic as a “supply chain risk” after the company refused to remove restrictions on its AI models for military use. Anthropic has set two “red lines”: no domestic mass surveillance and no fully autonomous weapons.
Trump has ordered all federal agencies to cease using Anthropic technology within six months. The company argues that the designation is “legally unfounded” and sets a dangerous precedent for companies doing business with the government. The conflict highlights the tension between corporate control over AI systems and national security needs, raising crucial questions about the ethical limits of military AI.
Trump Bans Anthropic from Federal Government
Trump announced an immediate ban on the use of Anthropic’s AI tools by all federal agencies, with a six-month transition period. The conflict stems from the Pentagon’s request to lift restrictions on Claude models to allow “all legal uses” of the technology.
Anthropic, the first AI company to partner with the military on a $200 million contract, uses its Claude Gov models for intelligence analysis and military planning. Hundreds of Google and OpenAI employees have signed an open letter in support of Anthropic. This clash tests the limits of the relationship between Silicon Valley and the defense sector, redefining the ethical boundaries of military AI.
Anthropic vs. the Pentagon: What’s Really at Stake
The conflict between Anthropic and the Pentagon raises a fundamental question: Who controls advanced AI systems, the companies that develop them or the government that wants to use them? Anthropic refuses to allow its models to be used for mass surveillance of Americans and fully autonomous weapons without human control.
The Department of Defense already uses highly automated systems and does not categorically prohibit autonomous weapons. A 2023 directive allows AI systems to select and engage targets without human intervention, provided they meet certain standards. For DPOs, this case highlights the importance of maintaining ethical safeguards, even in government contracts, while balancing national security and AI responsibility principles.
Google and OpenAI Employees Support Anthropic’s Position
Over 300 Google employees and 60 OpenAI employees have signed an open letter supporting Anthropic’s position against the Pentagon’s requests. The signatories urge their companies’ leaders to “put aside differences and stand together” to maintain boundaries against mass surveillance and fully automated weapons.
Sam Altman of OpenAI stated that his company also shares Anthropic’s “red lines,” while Jeff Dean of Google expressed opposition to mass surveillance as a violation of the Fourth Amendment. This solidarity among tech employees demonstrates a growing ethical awareness in the AI industry and the importance of maintaining common principles of responsibility, even in the face of government pressure.
CYBERSECURITY
CISA warns of RESURGE malware dormant on Ivanti devices
CISA has released new details about the RESURGE malware, which is being used in zero-day attacks against Ivanti Connect Secure devices via the CVE-2025-0282 vulnerability. The malware, deployed by the Chinese group UNC5221 since December 2024, employs sophisticated evasion techniques that keep it dormant on compromised systems.
RESURGE is a passive implant that does not actively communicate with its command-and-control (C2) infrastructure but waits indefinitely for specific TLS connections, using forged Ivanti certificates for authentication. For DPOs, this case highlights the importance of implementing advanced network monitoring and incident response procedures that also account for dormant and persistent threats.
Canadian Tire data breach affects 38 million accounts
Canadian Tire suffered a significant data breach that compromised 38 million customer accounts. The stolen data includes names, addresses, email addresses, phone numbers, and encrypted passwords. The scale of the breach once again highlights the importance of robust data protection strategies in the retail sector.
For DPOs, this incident is a case study in managing mass breaches, highlighting the need for effective communication plans and procedures for notifying the relevant authorities. The inclusion of encrypted passwords in the compromised data requires particular attention in risk assessment and recommendations to data subjects.
Cisco SD-WAN exploited for administrative access since 2023
Cisco has revealed that the critical vulnerability CVE-2026-20127 (CVSS 10.0) in its SD-WAN systems has been exploited by the UAT-8616 group since 2023. The flaw allows unauthenticated attackers to gain administrative privileges by bypassing authentication via crafted requests, compromising the peering authentication mechanism.
The case highlights how zero-day vulnerabilities can be exploited for years before they are discovered. For DPOs, this underscores the importance of implementing continuous monitoring controls and regular log audits, particularly for internet-exposed systems. The compromise of SD-WAN infrastructure can have significant impacts on the security of the entire corporate network.
Hackers exploit Claude Code in attack on Mexican government
A sophisticated cyberattack against the Mexican government used Claude AI to automate the compromise stages. The attackers exploited artificial intelligence to write exploits, create custom tools, and automate the exfiltration of over 150GB of sensitive government data.
This case represents a worrying evolution in attackers’ tactics, as they now use generative AI to enhance their offensive capabilities. For DPOs, there is a need to consider the malicious use of AI in risk assessments, implement specific controls to detect anomalous automated activity, and develop defense strategies that account for these new hybrid threats.
ManoMano data breach: 38 million users affected
The e-commerce platform ManoMano suffered a significant data breach that compromised the personal data of 38 million users. The stolen information includes names, email addresses, phone numbers, and other personal information, constituting a major incident in the European e-commerce sector.
The incident highlights the vulnerability of e-commerce platforms, which handle huge amounts of personal data and are prime targets for cybercriminals. For DPOs operating in the sector, this case underscores the importance of implementing security measures commensurate with the volume of data processed and effective breach notification procedures to manage impacts on a continental scale.
TECH & INNOVATION
ClawJacked: Critical Vulnerability in OpenClaw
A high-severity vulnerability, called ClawJacked, has affected OpenClaw, allowing malicious websites to take control of local AI agents via WebSocket. The attack exploits the absence of rate limiting for localhost connections, allowing brute-force password attacks at hundreds of attempts per second.
The vulnerability is particularly insidious because it requires no additional installation: a developer only needs to visit a compromised site for the exploit to run via JavaScript. Once authenticated, the attacker gains complete control of the AI agent, along with access to its configuration and application logs. OpenClaw released a patch within 24 hours, but the incident highlights the growing risks associated with AI agents that have privileged access to corporate systems.
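The core weakness described above is the lack of throttling on authentication attempts arriving over localhost. As a rough illustration of the kind of mitigation a patch could apply, the sketch below (plain Python, purely hypothetical and not based on OpenClaw’s actual code; the class name AuthRateLimiter and its parameters are illustrative assumptions) caps failed password attempts in a sliding window and applies an exponential lockout, which would reduce a drive-by brute force from hundreds of attempts per second to a handful per minute.

```python
import time
from collections import defaultdict

class AuthRateLimiter:
    """Hypothetical sketch: throttle authentication attempts on a localhost
    control channel (e.g. a WebSocket endpoint) to blunt brute-force attacks."""

    def __init__(self, max_attempts: int = 5, window_s: float = 60.0,
                 base_lockout_s: float = 30.0):
        self.max_attempts = max_attempts      # failures tolerated per window
        self.window_s = window_s              # sliding-window length in seconds
        self.base_lockout_s = base_lockout_s  # lockout doubles after each breach
        self._failures = defaultdict(list)    # client id -> failure timestamps
        self._locked_until = defaultdict(float)
        self._strikes = defaultdict(int)

    def allow_attempt(self, client_id: str) -> bool:
        """Return False while the client is locked out."""
        now = time.monotonic()
        if now < self._locked_until[client_id]:
            return False
        # Drop failures that fell out of the sliding window.
        self._failures[client_id] = [
            t for t in self._failures[client_id] if now - t < self.window_s
        ]
        return True

    def record_failure(self, client_id: str) -> None:
        """Register a failed password attempt and lock out on repeated abuse."""
        now = time.monotonic()
        self._failures[client_id].append(now)
        if len(self._failures[client_id]) >= self.max_attempts:
            self._strikes[client_id] += 1
            lockout = self.base_lockout_s * (2 ** (self._strikes[client_id] - 1))
            self._locked_until[client_id] = now + lockout
            self._failures[client_id].clear()

# Usage sketch: gate every authentication message through the limiter.
limiter = AuthRateLimiter()
if limiter.allow_attempt("ws-origin:evil.example"):
    ok = False  # result of the real password check would go here
    if not ok:
        limiter.record_failure("ws-origin:evil.example")
```

Pairing such throttling with a per-session secret that a malicious web page cannot guess would further blunt the WebSocket vector described above.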
Reddit Fined £14.5M for Unlawful Processing of Children’s Data
The UK ICO has fined Reddit £14.47 million for unlawfully processing data from children under 13. The platform had not implemented robust age-verification mechanisms, thereby violating regulations requiring parental consent for this age group.
The Authority also noted the absence of a specific DPIA on the risks associated with children’s data until January 2025. Reddit has announced an appeal, arguing that collecting more private information would conflict with user privacy. This penalty marks the second-highest ICO fine for violating the Children’s Code, signaling an increasingly strict approach to protecting children online.
Age Verification: Surveillance Honeypot Exposed
Security researchers have discovered 2,456 publicly accessible files on a US government server, revealing an extensive surveillance system hidden behind Discord’s age verification. The system is not limited to personal data checks but performs 269 distinct checks, scanning the internet and government sources.
The platform analyzes and stores biometric data, device fingerprints, IP addresses, and even selfie backgrounds for three years, comparing faces with databases of politically exposed persons. This discovery confirms fears about the creation of centralized databases of sensitive data, transforming simple identity checks into mass surveillance systems that are particularly vulnerable to cyberattacks.
Claude Code: Critical Vulnerabilities in AI Assistant
Anthropic has fixed multiple security vulnerabilities in Claude Code that allowed remote code execution and the theft of API credentials. The flaws exploited configuration mechanisms such as Hooks, MCP servers, and environment variables when developers opened untrusted repositories.
The vulnerabilities included user-consent bypass (CVSS 8.7), automatic execution of shell commands during initialization, and exfiltration of Anthropic API keys before authorization was requested. Simply launching Claude Code in a malicious directory was enough to compromise credentials and redirect API traffic to infrastructure controlled by attackers. Patches were released between September 2025 and January 2026.
SCIENTIFIC RESEARCH
Selection of the most relevant papers of the week from arXiv on AI, Machine Learning, and Privacy
Differential Privacy and Federated Learning
Tackling Privacy Heterogeneity in Differentially Private Federated Learning addresses a critical challenge for DPOs: clients have heterogeneous, non-uniform privacy requirements. The research proposes selection strategies that consider variable privacy budgets, a key element for GDPR-compliant FL implementations in multi-organization contexts. arXiv
JSAM: Privacy Straggler-Resilient Joint Client Selection and Incentive Mechanism Design introduces incentive mechanisms that manage “privacy stragglers”—clients with high privacy requirements. This is essential for balancing data protection and the economic sustainability of FL systems, a crucial aspect for compliance officers in assessing privacy costs and benefits. arXiv
Practical Differential Privacy Implementations
DPSQL+: A Differentially Private SQL Library with a Minimum Frequency Rule integrates differential privacy with minimum frequency rules for SQL queries. Particularly relevant for compliance, it combines theoretical DP guarantees with practical governance requirements, offering concrete tools for privacy-preserving data analysis in enterprise environments. arXiv
Differentially Private Truncation of Unbounded Data via Public Second Moments addresses DP limitations for unbounded data by leveraging second moments from public datasets. It broadens the practical applicability of DP in real-world scenarios where data distribution is not known a priori, a crucial factor for privacy impact assessments. arXiv
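Both entries turn on the same practical step: bounding each record’s influence before adding calibrated noise. As a generic illustration of that pattern, and not of the specific methods proposed in these papers, the following Python sketch (the helper name dp_truncated_mean is an illustrative assumption) clips values to a public range and perturbs the resulting mean with Laplace noise scaled to its sensitivity.

```python
import numpy as np

def dp_truncated_mean(values, lower, upper, epsilon, rng=None):
    """Generic epsilon-DP mean via truncation plus Laplace noise (textbook
    mechanism, not the papers' specific methods): each value is clipped to
    [lower, upper], so one record can shift the mean by at most
    (upper - lower) / n, and noise calibrated to that sensitivity is added."""
    rng = rng or np.random.default_rng()
    x = np.clip(np.asarray(values, dtype=float), lower, upper)
    n = len(x)
    sensitivity = (upper - lower) / n
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return x.mean() + noise

# Example: a privacy-preserving average over a bounded, publicly known range.
ages = [23, 35, 41, 29, 52, 61, 38, 44]
print(dp_truncated_mean(ages, lower=18, upper=90, epsilon=1.0))
```

In practice, choosing the clipping range is itself privacy-relevant, which is the problem the second paper tackles by deriving it from public second moments.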
Machine Unlearning and Privacy
Easy to Learn, Yet Hard to Forget: Towards Robust Unlearning Under Bias highlights the phenomenon of “shortcut unlearning” in biased models. Critical for the GDPR right to be forgotten: it demonstrates how bias can compromise the effectiveness of data deletion, requiring more sophisticated approaches to ensure effective compliance. arXiv
Mitigating Membership Inference in Intermediate Representations proposes layer-aware DP-SGD to counter membership inference attacks. Important for DPOs: it offers granular protections against unauthorized inferences in intermediate representations, an area often overlooked in privacy assessments. arXiv
Privacy in Advanced Systems
CQSA: Byzantine-robust Clustered Quantum Secure Aggregation explores quantum-secure aggregation in FL, providing information-theoretic privacy. Although still emerging, it offers prospects for ultra-robust privacy protections, relevant to long-term strategic planning for corporate privacy. arXiv
TrajGPT-R: Generating Urban Mobility Trajectory develops a transformer framework for generating privacy-preserving urban mobility trajectories. Significant for smart cities and GDPR: it enables urban analysis while protecting personal mobility data, a growing use case for public entities. arXiv
AI ACT IN A NUTSHELL - Part 10
Article 14 - Human Oversight
After analyzing the obligations of transparency and information provision to deployers under Article 13 in the previous part, we now delve into one of the most innovative and controversial requirements of the AI Act: Article 14, dedicated to human oversight of high-risk AI systems.
The fundamental principle: the human being at the center
Article 14 establishes that high-risk AI systems must be designed and developed in such a way—including through appropriate human-machine interface tools—that natural persons can effectively supervise them during their period of use. This requirement translates the principle of a human-centric approach to artificial intelligence into a binding legal rule. No matter how sophisticated, a machine cannot operate without the possibility of meaningful human intervention.
Human oversight has a specific objective: to prevent or minimize risks to health, safety, or fundamental rights that may arise when a high-risk AI system is used in accordance with its intended purpose or under reasonably foreseeable conditions of misuse, in particular when such risks persist despite the application of the other requirements of the Regulation.
Oversight measures: a two-tier system
The Regulation provides that oversight measures shall be proportionate to the risks, the level of autonomy, and the context of use of the system, and may be ensured through two types of intervention, which may also be combined: measures identified and integrated into the system by the provider before it is placed on the market, and measures identified by the provider but intended to be implemented by the deployer.
This dual track reflects the legislator’s awareness that human oversight is not the sole responsibility of a single actor, but is distributed throughout the AI value chain, requiring structured collaboration between those who design the system and those who use it operationally.
The capabilities required of the human supervisor
Article 14(4) details the capabilities that must be guaranteed to the natural persons responsible for supervision. The system must be provided in such a way that the human supervisor can: adequately understand the capabilities and limitations of the system; monitor its operation by detecting anomalies and malfunctions; remain aware of the possible tendency to automatically or excessively rely on the system’s outputs (so-called automation bias); correctly interpret the system’s outputs; decide not to use the system or to ignore, override, or reverse its output; and intervene in the operation of the system or interrupt it through a stop button or similar procedure that allows the system to stop in a safe state.
The reference to automation bias is particularly significant: the legislator expressly recognizes the cognitive risk that human operators tend to passively accept AI recommendations, transforming formal oversight into a mere act of automatic ratification.
The special case of biometric identification
Article 14(5) introduces a reinforced safeguard for high-risk remote biometric identification systems: no action or decision may be taken by the deployer based on the identification produced by the system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary expertise, training, and authority. This provision embodies the Regulation’s highest level of caution: where the risk to fundamental rights is highest, human oversight must be dual and independent.
Practical implications for organizations
For DPOs, compliance officers, and legal advisors, Article 14 has significant operational implications. Organizations operating as deployers will need to identify and train personnel with the appropriate skills to perform the role of human supervisor, establishing documented procedures for the effective exercise of oversight. Providers, for their part, will need to design interfaces that make meaningful human control technically possible, going beyond mere formal compliance.
The most significant challenge concerns the relationship between Article 14 of the AI Act and Article 22 of the GDPR on the right not to be subject to decisions based solely on automated processing. The two regulations converge in requiring substantial human involvement, not merely nominal, in decisions that significantly affect individuals. The human oversight required by the AI Act must therefore be integrated with the safeguards already provided for in data protection legislation, requiring a coordinated approach to compliance.
Penalties
Failure to comply with human oversight requirements may result in administrative fines of up to €15 million or, for companies, up to 3% of the total worldwide annual turnover for the preceding financial year, whichever is higher. Verification of the effectiveness of oversight measures will be a central element in compliance assessments by the competent authorities.
In the next installment, Part 11, we will analyze Article 15, which addresses the accuracy, robustness, and cybersecurity of high-risk AI systems, and the fundamental technical requirements that complete the framework of mandatory requirements for high-risk systems.
Noteworthy events and meetings
AI-generated imagery and protection of privacy: EDPB supports the Global Privacy Assembly’s joint statement (published February 23, 2026)
EDPB | Info
Meeting of the Data Protection Working Group, Council (published February 27, 2026)
EDPB | Info
117th Plenary meeting (published March 18, 2026)
EDPB | Info
Blog post: Advancing into Practice: Third Meeting of the AI Act Correspondents Network
EDPS | Info
European Union endorses Leaders’ Declaration at AI Summit in India
European Commission | Info
Executive Vice-President Virkkunen in India for summit on artificial intelligence
European Commission | Info
Conclusion
The global coordination of privacy authorities on generative AI and the tightening of workplace controls outline a rapidly evolving regulatory framework, where traditional enforcement is combined with the new challenges posed by artificial intelligence. The joint statement by 61 authorities coordinated by the Global Privacy Assembly represents a paradigm shift: no longer fragmented national approaches, but a unified strategy that recognizes how AI-generated imagery transcends any jurisdictional boundaries.
The Amazon Italy case crystallizes this evolution with particular clarity. The Garante’s objection is not limited to traditional monitoring of work performance but concerns a form of systemic profiling encompassing medical conditions, union activities, and family dynamics, revealing a model of total surveillance that transforms the employment relationship into a behavioral observation laboratory. The documentation compiled by the Authority shows how workforce management algorithms can generate inferences that far exceed their stated purposes, creating archives of personal vulnerabilities that could potentially be used for discriminatory automated decisions.
This dynamic is directly linked to the EDPB’s concerns about generative AI. Both phenomena share the same matrix: the massive collection of personal data to feed algorithmic systems that produce new categories of risk. In the Amazon case, worker data is processed to optimize production processes; in generative AI tools, images and personal content are used for training without explicit consent. In both contexts, the individuals concerned lose control over their data and suffer the consequences of opaque algorithmic decisions.
The resistance of national governments to the Commission’s proposed reforms in the Digital Omnibus adds a further layer of complexity. While Brussels is pushing for regulatory simplifications that favor European competitiveness in AI, member states are showing caution on the most sensitive issues, such as the redefinition of “personal data” in relation to pseudonymization. This tension reflects a structural dilemma: accelerating European innovation risks weakening the privacy protections that represent the distinctive value of the GDPR model.
The CNIL initiative on the PANAME project introduces a pragmatic perspective into this scenario. The GDPR audit specific to AI models represents an attempt to operationalize compliance in a sector where traditional assessment tools have clear limitations. The possibility of publicly testing these tools underscores authorities’ recognition of the need for innovative approaches to govern rapidly evolving technologies.
The issue of human oversight, analyzed in the AI Act in a Nutshell section with Article 14, is the common thread running through the developments that emerged this week. The profiling of Amazon workers represents precisely the type of risk that Article 14 aims to prevent: algorithmic decisions operating without effective human control and producing invasive, discriminatory profiling.
Article 14’s requirement that deployers be able to intervene in, override, or interrupt high-risk AI systems finds concrete illustration in the Italian case. Similarly, the joint statement on AI-generated imagery highlights what happens when generative systems operate without adequate human supervision, producing non-consensual content that infringes on fundamental rights.
For DPOs and compliance professionals, this week marks a phase of greater operational complexity. The Amazon case demonstrates that workplace monitoring requires audits that go beyond mere legal compliance checks. It is necessary to map all “satellite processing” generated by algorithmic systems, identifying the risks of undeclared profiling and indirect discrimination. Organizations will need to develop specific skills to assess how AI systems can generate inferences about employees that exceed their original purposes.
International convergence on generative AI also requires a review of global compliance strategies. Companies operating in multiple markets will no longer be able to fragment their privacy approaches by jurisdiction, but will have to adopt uniform standards that meet the highest common denominator.
The tensions that emerged this week raise fundamental questions about the future of the European digital ecosystem. Will the GDPR model evolve while maintaining its identity as a guarantor of rights? Does the international standardization of AI rules represent an opportunity to export European values, or does it risk diluting them in downward compromises? How will organizations balance innovation and compliance in an increasingly dynamic regulatory environment? The answers to these questions will determine not only the effectiveness of privacy protections but also the very sustainability of the European digital sovereignty project.
📧 Edited by Nicola Fabiano
Lawyer - Fabiano Law Firm
🌐 Studio Legale Fabiano: https://www.fabiano.law
🌐 Blog: https://www.nicfab.eu
🌐 DAPPREMO: www.dappremo.eu
