NicFab Newsletter
Issue 7 | February 10, 2026
Privacy, Data Protection, AI, and Cybersecurity
Welcome to issue 7 of the weekly newsletter dedicated to privacy, data protection, artificial intelligence, cybersecurity and ethics. Every Tuesday you will find a curated selection of the most relevant news from the previous week, with a focus on European regulatory developments, case law, enforcement and technological innovation.
In this issue
- EDPB - EUROPEAN DATA PROTECTION BOARD
- EDPS - EUROPEAN DATA PROTECTION SUPERVISOR
- EUROPEAN COMMISSION
- DIGITAL MARKETS & PLATFORM REGULATION
- INTERNATIONAL DEVELOPMENTS
- ARTIFICIAL INTELLIGENCE
- CYBERSECURITY
- TECH & INNOVATION
- SCIENTIFIC RESEARCH
- AI Act in Pills
- Events and Meetings
- Conclusion
EDPB - EUROPEAN DATA PROTECTION BOARD
Opinion 1/2026 on the draft decision of the Dutch Supervisory Authority regarding the Controller Binding Corporate Rules of the Heineken Group
The EDPB has issued its opinion on Heineken Group’s Controller Binding Corporate Rules (BCRs), marking another step in the approval process for these critical data transfer mechanisms. BCRs remain one of the most robust legal bases for international data transfers within corporate groups, particularly following the uncertainty created by the Schrems II decision.
For DPOs in multinational organizations, this opinion reinforces the continued viability of the BCR route, albeit with the extensive documentation and compliance requirements that the approval process entails. The involvement of the Dutch authority reflects Heineken’s Dutch headquarters, highlighting how lead supervisory authority determinations shape the BCR approval pathway.
Organizations considering BCRs should monitor the specific requirements and conditions outlined in this opinion, as they often signal evolving EDPB expectations for corporate data transfer governance frameworks.
Opinion 4/2026 on the draft decision of the Dutch Supervisory Authority regarding the Controller Binding Corporate Rules of the ABB Group
The EDPB’s review of ABB Group’s Controller BCRs represents another significant approval in the engineering and technology sector. ABB’s global operations span numerous jurisdictions with varying data protection requirements, making BCRs particularly valuable for ensuring consistent data protection standards across their international corporate structure.
This opinion is especially relevant for DPOs in industrial and technology companies with complex global supply chains and operational networks. The technical nature of ABB’s business likely involves sophisticated data processing activities, including IoT and industrial automation data, which may present unique challenges in the BCR framework.
DPOs should examine how ABB addresses sector-specific data processing activities within their BCR structure, as this may provide valuable precedents for similar organizations navigating the intersection of industrial operations and data protection compliance.
Opinion 2/2026 on the draft decision of the Dutch Supervisory Authority regarding the Controller Binding Corporate Rules of the AkzoNobel Group
AkzoNobel’s BCR approval process adds another major chemical and materials company to the growing list of organizations successfully implementing these comprehensive data transfer frameworks. The company’s global presence in paints, coatings, and specialty chemicals requires sophisticated data governance to manage customer, supplier, and operational data across multiple continents.
For DPOs in manufacturing and chemical industries, AkzoNobel’s approach may offer insights into addressing the unique data challenges of industrial operations, including supply chain data, research and development information, and regulatory compliance data that must flow between jurisdictions with different legal requirements.
The timing of these multiple Dutch authority BCR approvals suggests either a coordinated review process or increased activity in the Netherlands’ business community toward implementing robust international data transfer mechanisms.
Opinion 3/2026 on the draft decision of the Dutch Supervisory Authority regarding the Controller Binding Corporate Rules of the FrieslandCampina Group
FrieslandCampina’s BCR approval addresses the data protection needs of one of the world’s largest dairy cooperatives, whose operations span from dairy farming to consumer products across multiple continents. The agricultural and food processing sector presents unique data protection challenges, including farmer data, supply chain information, and consumer data across diverse regulatory environments.
This opinion is particularly relevant for DPOs in the food and agriculture sector, where data flows often involve sensitive commercial information, agricultural data, and consumer preferences that require careful protection while enabling global business operations. The cooperative structure of FrieslandCampina may also provide insights for similar member-based organizations.
The agricultural sector’s increasing digitization, including precision farming and supply chain transparency initiatives, makes robust international data transfer mechanisms like BCRs increasingly critical for organizations operating in this space.
Register now for our conference on cross-regulatory cooperation in the EU (17 March)
The EDPB’s upcoming conference on cross-regulatory cooperation represents a significant opportunity for data protection professionals to understand how GDPR intersects with other EU regulatory frameworks. Scheduled for March 17, 2026, in Brussels, this event addresses the increasingly complex landscape where data protection requirements interact with digital services, AI, telecommunications, and financial services regulations.
For DPOs, this conference is particularly valuable as organizations face compliance challenges across multiple regulatory domains simultaneously. The rise of sector-specific regulations like the AI Act, Digital Services Act, and NIS2 Directive creates new compliance intersections that require coordinated approaches.
Registration closes February 26, 2026, with both in-person and remote participation options available. DPOs should prioritize attendance to gain insights into regulatory coordination mechanisms and best practices for managing multi-regulatory compliance frameworks within their organizations.
EDPS - EUROPEAN DATA PROTECTION SUPERVISOR
🎙️ TechSonar Podcast: The “illusion of education” with personalised learning
February 8, 2026
Do AI-driven personalised learning systems empower students? Or do they turn education into continuous monitoring and profiling? How much autonomy do learners have when algorithms decide what comes next? As concerns about children’s rights, bias, consent and inequality grow, this episode explores, with our expert Saskia Kaspkias, where the ethical and legal boundaries should be drawn.
🎙️ TechSonar Podcast: Confidential computing - a shield against attacks?
February 2, 2026
As data increasingly moves to the cloud, can we really trust who has access to it while it is being processed? Does protecting data ‘in use’ fundamentally change long-standing security assumptions, or does it simply shift the risk elsewhere? As confidential computing technology converges with AI and privacy-enhancing tools, questions around trust, control and hardware dependency become crucial.
EUROPEAN COMMISSION
Commission Preliminarily Finds TikTok’s Addictive Design in Breach of the Digital Services Act
The European Commission has issued preliminary findings that TikTok violates the Digital Services Act through its addictive design features. This marks a significant enforcement action under the DSA, which requires very large online platforms to assess and mitigate systemic risks, including those related to mental health and user well-being.
For DPOs, this development signals the Commission’s willingness to take strong action on platform design elements that may harm users, particularly minors. The case demonstrates how digital rights protection extends beyond traditional data protection into broader user safety concerns. Organizations operating similar platforms should review their recommendation algorithms and engagement mechanisms to ensure compliance with both DSA requirements and GDPR principles around data minimization and purpose limitation.
This preliminary finding could result in substantial fines and mandatory design changes, setting important precedents for how addictive features will be regulated across the digital landscape.
Keynote Speech by Commissioner Kubilius at the AI in Defence Summit 2026
Commissioner Kubilius addressed the intersection of artificial intelligence and defense applications at the AI in Defence Summit, highlighting the European approach to AI governance in sensitive sectors. The speech likely covered the EU’s strategic autonomy goals alongside its commitment to ethical AI standards, a combination particularly relevant as AI Act implementation continues.
For DPOs working with AI systems, especially in critical sectors, this signals continued European emphasis on responsible AI development. The defense context underscores the importance of robust governance frameworks that balance innovation with fundamental rights protection. Organizations deploying AI in high-risk applications should ensure their compliance frameworks address both the AI Act’s requirements and sector-specific regulations.
The timing suggests the Commission is actively shaping the narrative around AI governance as implementation deadlines for the AI Act approach, emphasizing that ethical considerations remain paramount even in strategic applications.
Daily News Update from the European Commission
The Commission’s daily news summary highlighted the TikTok enforcement action as a key development, demonstrating the ongoing priority placed on Digital Services Act implementation. This daily communication channel reflects the Commission’s commitment to transparency in its regulatory enforcement activities.
For DPOs, monitoring these daily updates provides valuable insight into enforcement priorities and emerging regulatory trends. The prominence given to the TikTok case suggests that platform governance and user protection remain high on the Commission’s agenda, alongside ongoing AI regulatory developments.
Regular review of Commission communications helps organizations stay ahead of regulatory shifts and understand how enforcement patterns may affect their own compliance obligations across the expanding digital regulatory landscape.
Submarine cable security: €347 million investment and new toolbox
The European Commission has unveiled a package of measures to strengthen the security and resilience of Europe’s submarine data cable infrastructure, which carries 99% of intercontinental internet traffic. The new Cable Security Toolbox introduces six strategic and four technical support measures, jointly developed by the Commission and Member States within the Cables Expert Group.
The Connecting Europe Facility (CEF) Digital Work Programme has been amended to allocate €347 million for strategic submarine cable projects: in 2026, two funding calls worth €60 million will support cable repair modules, while a separate €20 million call targets SMART equipment for real-time monitoring. An additional €267 million will be allocated through two calls in 2026 and 2027 for Cable Projects of European Interest (CPEIs), with 13 priority areas identified across three five-year phases up to 2040.
For DPOs and critical infrastructure security officers, this initiative confirms the EU’s increasing focus on protecting strategic communication networks, in a geopolitical context marked by deliberate sabotage incidents, such as those recorded in the Baltic Sea. The resilience of submarine digital infrastructure is becoming an essential element in risk assessments and business continuity plans.
Guidelines to protect media content on online platforms
The European Commission has released new guidelines to ensure that professional journalism is recognised and protected across the world’s largest digital platforms. The guidelines support Very Large Online Platforms (VLOPs), as defined by the Digital Services Act, and media service providers in implementing the relevant provisions of the European Media Freedom Act (EMFA).
Article 18(1) of the EMFA introduces concrete safeguards against unjustified removal of journalistic content produced according to professional standards. VLOPs are required to notify media service providers in advance when they intend to remove journalistic content, clearly explain the reasons for their decision, and grant a 24-hour response period before the removal takes effect.
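To make the notice-and-wait sequence concrete, here is a minimal Python sketch of how a platform might model the Article 18(1) obligation; the class and field names are illustrative assumptions, not taken from the guidelines.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RESPONSE_WINDOW = timedelta(hours=24)  # Article 18(1) EMFA response period

@dataclass
class RemovalNotice:
    content_id: str
    provider: str               # media service provider being notified
    reasons: str                # platform's stated grounds for removal
    notified_at: datetime
    provider_reply: str | None = None

    def removal_may_proceed(self, now: datetime) -> bool:
        """Removal takes effect only after the 24-hour window has elapsed
        with no reply; a reply received in time must first be assessed."""
        if self.provider_reply is not None:
            return False  # hold removal pending review of the reply
        return now - self.notified_at >= RESPONSE_WINDOW

notice = RemovalNotice(
    content_id="article-4821",
    provider="example-newsroom.eu",
    reasons="alleged breach of platform terms, section 3.2",
    notified_at=datetime.now(timezone.utc),
)
print(notice.removal_may_proceed(datetime.now(timezone.utc)))  # False: window open
```

In practice, a reply received within the window would route the case to human review rather than the simple gate shown here.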
For DPOs working in the editorial and media sector, these guidelines represent an important protective instrument that strengthens the balance between content moderation and press freedom. The convergence between EMFA and DSA creates an integrated framework requiring platforms to more carefully assess journalistic content before taking moderation action.
DIGITAL MARKETS & PLATFORM REGULATION
EU Declares TikTok’s Design Features Violate Digital Services Act
The European Commission has issued preliminary findings that TikTok’s core design features—including infinite scroll, autoplay, push notifications, and personalized recommendation systems—constitute violations of the Digital Services Act. This marks the first time Brussels has established a global legal standard on addictive platform design, targeting features that “reward” users with constant content and shift brains into “autopilot mode.”
The Commission found TikTok failed to adequately assess risks to users’ physical and mental health, particularly for minors and vulnerable adults. Existing mitigation measures like parental controls and time management tools were deemed insufficient and easily circumvented. TikTok now faces potential fines up to 6% of global annual revenue unless it implements structural changes, including disabling infinite scroll and introducing mandatory usage breaks.
For DPOs, this represents a significant expansion of privacy and digital rights enforcement into UX design territory, requiring closer collaboration with product teams to assess how interface elements impact user wellbeing and data processing behaviors.
Brussels Sets Precedent on Platform Addiction Under DSA
EU tech chief Henna Virkkunen emphasized that these preliminary findings establish the first global legal standard on addictive design, with particular focus on protecting minors who “don’t have the same tools” to avoid compulsive behavior. The investigation, ongoing since February 2024, expands beyond traditional content moderation to examine how platform architecture itself creates systemic risks.
The Commission’s approach will likely influence ongoing investigations into Meta’s Facebook and Instagram platforms, which face similar scrutiny over addictive algorithms. TikTok has strongly rejected the findings as “categorically false,” setting up a significant legal battle that could reshape how social media platforms operate across the EU.
This development signals a regulatory shift toward treating platform design as a fundamental rights issue, requiring DPOs to consider how user interface decisions intersect with data protection principles around fair processing and user autonomy.
Global Momentum Builds for Teen Social Media Restrictions
The EU’s action against TikTok coincides with broader international movement toward protecting minors online. Spain recently announced plans to block social media access for under-16s, following Australia’s pioneering ban on 10 platforms for teenagers. France and the UK are considering similar measures, creating a global regulatory environment increasingly hostile to unrestricted youth access to social platforms.
TikTok’s ownership by China’s ByteDance adds geopolitical complexity, particularly as the company navigates a recent US deal creating an American-majority joint venture while maintaining ByteDance control over core business functions. This regulatory pressure spans multiple jurisdictions simultaneously, creating unprecedented compliance challenges.
For DPOs, the convergence of age verification requirements, design restrictions, and cross-border data governance creates a complex web of obligations requiring comprehensive privacy impact assessments that consider developmental psychology alongside traditional data protection concerns.
Apple Ads and Apple Maps: no designation under the Digital Markets Act
The European Commission has found that Apple’s online advertising service Apple Ads and Apple’s online intermediation service Apple Maps should not be designated under the Digital Markets Act. The decision follows Apple’s notification of these services on 27 November 2025, in which the company argued that the notified services should not qualify as important gateways between business users and end users.
The Commission concluded that Apple does not qualify as a gatekeeper in relation to Apple Ads and Apple Maps, based on several considerations: Apple Maps has a relatively low overall usage rate in the EU compared to competitors, while Apple Ads has very limited scale in the European online advertising sector. This decision does not affect Apple’s existing gatekeeper designation for other core platform services (iOS, iPadOS, App Store) made in September 2023 and April 2024.
For DPOs and digital compliance professionals, this decision illustrates how gatekeeper assessment is service-specific and not automatically extendable to all services of an already designated company. The Commission will continue to monitor market developments regarding these services, should any substantial changes arise.
INTERNATIONAL DEVELOPMENTS
UK Data Watchdog Opens Grok Investigation
The UK’s Information Commissioner’s Office has launched a formal investigation into Elon Musk’s X and xAI companies following reports that the Grok AI system generated sexualized deepfakes using personal data. The probe examines whether personal data was processed lawfully and whether adequate safeguards were implemented to prevent harmful content generation.
This investigation highlights critical considerations for DPOs managing AI systems that process personal data. The case demonstrates how AI applications can create unexpected data protection risks, particularly around consent, lawful basis, and data minimization principles. Organizations deploying AI tools must ensure robust governance frameworks that anticipate and prevent misuse of personal data.
The parallel investigations by Ofcom under the Online Safety Act show the multi-regulatory landscape organizations face. DPOs should note the ICO’s emphasis on protecting children’s data and the “significant potential harm” standard when assessing AI system risks.
EU-US Data Sharing Deal Raises Surveillance Concerns
The European Commission is negotiating unprecedented data sharing arrangements with US border authorities, potentially granting access to Europeans’ fingerprints, law enforcement records, and social media history. The talks continue despite growing concerns about American surveillance practices and calls from MEPs to suspend negotiations given the current geopolitical context.
For DPOs, this development underscores the complexity of international data transfers in an increasingly fragmented privacy landscape. The proposed Enhanced Border Security Partnerships would create the first large-scale personal data sharing framework for border control purposes with a non-EU country, potentially setting significant precedents for future transfer mechanisms.
Organizations should monitor these negotiations closely as they may impact adequacy decisions and transfer mechanisms. The European Data Protection Supervisor’s concerns about the unprecedented scope highlight the need for robust safeguards in any international data sharing arrangements.
ARTIFICIAL INTELLIGENCE
AI Chatbots Are Not Your Friends, Experts Warn
A new international AI safety report highlights concerning trends in AI companion usage, with platforms like Replika and Character.ai reaching tens of millions of users seeking emotional connections with chatbots. Leading AI researcher Yoshua Bengio warns that even general-purpose tools like ChatGPT can develop companion-like relationships through repeated interactions, potentially increasing loneliness and reducing real social connections.
For data protection officers, this trend raises significant concerns about emotional manipulation and consent validity. The sycophantic nature of these systems—designed to please users rather than serve their genuine interests—mirrors social media’s psychological exploitation patterns. European Parliament lawmakers are already pressuring the Commission to examine potential restrictions under the EU AI Act, particularly regarding children’s mental health impacts.
The regulatory response will likely focus on horizontal legislation addressing multiple AI risks simultaneously, requiring DPOs to develop comprehensive frameworks for managing AI systems that collect intimate emotional data while potentially harming user wellbeing.
Deepfakes Spreading and More AI Companions: Seven Takeaways from Safety Report
The latest International AI Safety report reveals dramatic improvements in AI capabilities, with systems achieving gold-level performance in mathematical competitions and reasoning tasks showing “very significant jumps.” However, these advances come with persistent vulnerabilities—AI systems remain prone to hallucinations and struggle with autonomous long-term projects, though their ability to complete software engineering tasks is doubling every seven months.
The proliferation of deepfakes presents immediate data protection challenges, with synthetic content becoming increasingly sophisticated and widespread. For DPOs, this creates dual concerns: protecting individuals from deepfake abuse while ensuring AI training datasets don’t violate privacy rights. The report’s emphasis on “jagged” AI capabilities—exceptional in some areas, limited in others—underscores the complexity of implementing appropriate safeguards.
This technological landscape demands nuanced regulatory approaches that account for rapid capability improvements while addressing current limitations and emerging risks in automated decision-making systems.
How SAP Is Modernising HMRC’s Tax Infrastructure with AI
HMRC’s selection of SAP to overhaul its core revenue systems through the Enterprise Tax Management Platform represents a significant shift toward AI-native public sector infrastructure. Rather than retrofitting existing systems with AI capabilities, this approach embeds machine learning and automated decision-making into the fundamental architecture of tax administration.
This modernisation strategy offers valuable lessons for DPOs managing AI integration across large organisations. By replacing legacy systems entirely, HMRC avoids the compliance complications of layering AI tools over incompatible infrastructure. However, this comprehensive approach also amplifies data protection risks, as AI systems will have unprecedented access to sensitive tax information and automated decision-making authority.
The project highlights the importance of privacy-by-design principles in AI system procurement, ensuring that data protection frameworks are built into the core architecture rather than added as afterthoughts to AI-enabled government services.
CYBERSECURITY
NIS2 applied to SMEs: cost-effective strategies for an effective approach
The NIS2 Directive poses a significant challenge for Italian SMEs, which have historically underestimated the importance of cybersecurity. The CLUSIT 2025 report highlights an alarming figure: 37.8% of SMEs have suffered cyberattacks, with 50% of attacks concentrated on companies with a turnover below one million euros. This scenario shows that cybercriminals favour mass-scale strategies over specific high-profile targets.
For DPOs working with SMEs, these figures underline the urgency of developing economically sustainable NIS2 compliance strategies. The 45.5% growth in reported cybercrimes between 2019 and 2023, combined with the fact that only 32% of business owners adopt at least 7 of the 11 ISTAT-monitored measures, highlights a critical gap between actual risks and threat perception that must be closed through targeted training and awareness-raising.
TECH & INNOVATION
From Chatbot to Checkout: Who Pays When Transactional Agents Play?
As AI agents become capable of making autonomous purchases on behalf of users, data protection officers face new challenges around liability and consent. Companies like OpenAI and Perplexity are integrating native checkout features into their platforms, while Amazon’s “Buy For Me” feature demonstrates how transactional agents are reshaping e-commerce.
The legal landscape remains murky when these AI systems make errors or unauthorized purchases. While existing frameworks like the Uniform Electronic Transactions Act may apply, DPOs must consider how consent mechanisms work when agents act autonomously. Key concerns include ensuring users maintain meaningful control over their data and purchase decisions, implementing robust error prevention systems, and maintaining detailed audit logs.
Organizations deploying transactional agents should establish clear contractual terms about data usage, implement strong authentication before purchases, and ensure transparency about how personal and payment data flows through these AI-driven transactions.
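As a rough illustration of those recommendations, the sketch below gates an agent-initiated purchase behind a user-set spending cap and writes a hash-chained audit entry; all names and the cap mechanism are hypothetical, not drawn from any vendor’s API.

```python
import hashlib
import json
from datetime import datetime, timezone

audit_log: list[dict] = []  # append-only, hash-chained purchase trail

def authorize_purchase(user_id: str, item: str, amount_eur: float,
                       spending_cap_eur: float) -> bool:
    """Approve an agent-initiated purchase only within the user's cap,
    and record a tamper-evident audit entry either way."""
    approved = amount_eur <= spending_cap_eur
    entry = {
        "user_id": user_id,
        "item": item,
        "amount_eur": amount_eur,
        "approved": approved,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # link to the previous entry so after-the-fact edits are detectable
        "prev_hash": audit_log[-1]["hash"] if audit_log else None,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    audit_log.append(entry)
    return approved

print(authorize_purchase("u-42", "headphones", 89.0, spending_cap_eur=100.0))  # True
print(authorize_purchase("u-42", "laptop", 1299.0, spending_cap_eur=100.0))    # False
```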
Data Breach at Govtech Giant Conduent Balloons
Government technology contractor Conduent’s January 2025 ransomware attack has affected far more individuals than initially disclosed, with impacts now reaching at least 15.4 million Texans and 10.5 million Oregonians. The stolen data includes names, Social Security numbers, medical records, and health insurance information across multiple states.
This breach highlights critical risks for organizations handling government data at scale. Conduent processes information for over 100 million Americans through various healthcare programs, making the potential scope massive. The company’s delayed and incomplete disclosure raises questions about breach notification compliance and transparency obligations.
DPOs should note the extended timeline—the attack occurred in January 2025, was disclosed in April, with notifications still ongoing into 2026. This demonstrates the complexity of assessing third-party vendor risks when dealing with government contractors who handle vast datasets across multiple jurisdictions and regulatory frameworks.
“ICE Out of Our Faces Act” Targets Biometric Surveillance
Senate Democrats have introduced legislation to ban ICE and CBP from using facial recognition and other biometric surveillance technologies. The proposed bill would require deletion of previously collected biometric data and allow individuals to sue for damages after violations.
While the bill faces slim chances in a Republican-controlled Congress, it signals growing legislative scrutiny of government biometric surveillance programs. The proposal extends beyond facial recognition to include voice recognition and other biometric identifiers, reflecting broader privacy concerns about automated identification systems.
DPOs in organizations that contract with or share data with immigration authorities should monitor this legislative development. The bill’s emphasis on prohibiting the use of third-party biometric data in investigations could impact data sharing agreements and require review of existing contracts with government agencies that might access biometric information.
UK Watchdog Investigates X Over Grok AI Sexual Deepfakes
The UK’s Information Commissioner’s Office has opened a formal GDPR investigation into X and xAI following reports that the Grok AI tool generated non-consensual sexual deepfakes. The investigation focuses on whether appropriate safeguards were implemented and whether people’s image data was processed lawfully and transparently.
This case represents a significant test of GDPR’s application to AI-generated content using personal data without consent. The ICO is examining fundamental data protection principles including fairness, lawfulness, and transparency when personal images are used to create synthetic content. Potential fines could reach £17.5m or 4% of global turnover, whichever is higher.
DPOs deploying AI systems that process images or generate synthetic content should ensure robust consent mechanisms, implement technical safeguards against misuse, and establish clear policies about acceptable use. The investigation emphasizes the importance of privacy-by-design principles in AI development and the need for ongoing monitoring of AI system outputs.
GDPR Deletion Requests: A System Under Strain
A detailed analysis of 20 GDPR deletion requests reveals systemic compliance failures, with 12 companies completely ignoring requests and only 2 complying immediately. Even companies advertising GDPR compliance, including EU-based organizations and charities, failed to honor legitimate deletion requests.
The investigation exposed enforcement challenges, with data protection authorities either unresponsive or claiming limited jurisdiction. One Czech company continued displaying user data months after claiming deletion, while regulatory bodies failed to take action despite clear violations.
This real-world testing highlights critical gaps between GDPR’s theoretical protections and practical enforcement. DPOs should audit their own deletion processes, ensure staff training on proper response procedures, and implement automated systems to track and fulfill data subject requests. The findings suggest many organizations treat GDPR compliance as a marketing claim rather than operational reality.
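For DPOs auditing their own processes, here is a minimal sketch of deadline tracking, assuming the one-month response period of GDPR Article 12(3) (approximated here as 30 days) and purely illustrative field names:

```python
from dataclasses import dataclass
from datetime import date, timedelta

DEADLINE = timedelta(days=30)  # Article 12(3) "one month", approximated

@dataclass
class ErasureRequest:
    subject_email: str
    received: date
    completed: date | None = None

    @property
    def due(self) -> date:
        return self.received + DEADLINE

    def is_overdue(self, today: date) -> bool:
        return self.completed is None and today > self.due

requests = [
    ErasureRequest("alice@example.com", received=date(2026, 1, 5)),
    ErasureRequest("bob@example.com", received=date(2026, 1, 20),
                   completed=date(2026, 1, 28)),
]
# surface requests that have silently lapsed, as in the cases reported above
overdue = [r.subject_email for r in requests if r.is_overdue(date(2026, 2, 10))]
print(overdue)  # ['alice@example.com']
```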
SCIENTIFIC RESEARCH
Selection of the most relevant papers of the week from arXiv on AI, Machine Learning and Privacy
Differential Privacy & Synthetic Data
Private PoEtry: Private In-Context Learning via Product of Experts - Addresses privacy challenges in large language model in-context learning, where training examples may contain sensitive information. Proposes a differential privacy approach using product of experts methodology to protect data while maintaining model utility. Critical for organizations using LLMs with confidential data. arXiv
Privacy Amplification Persists under Unlimited Synthetic Data Release - Demonstrates that differential privacy guarantees can be strengthened when releasing synthetic data instead of the underlying generative model. Extends privacy amplification theory beyond asymptotic regimes, providing practical guidance for synthetic data strategies in compliance frameworks. arXiv
Synthesizing Realistic Test Data without Breaking Privacy - Tackles the challenge of generating synthetic datasets that maintain statistical properties while preventing membership inference attacks. Uses GANs with enhanced privacy protections, addressing a key compliance need for realistic test data generation without compromising individual privacy. arXiv
Classification Under Local Differential Privacy with Model Reversal and Model Averaging - Reframes local differential privacy as a transfer learning problem to improve data utility while maintaining privacy guarantees. Introduces model reversal techniques that could enhance privacy-preserving analytics capabilities for organizations processing personal data under strict privacy requirements. arXiv
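For readers less familiar with the local model these papers build on, here is a minimal sketch of its textbook building block, binary randomized response: each user perturbs a single bit on-device, and the aggregator debiases the reported frequency. It is a toy illustration, not the method of any paper above.

```python
import math
import random

def randomized_response(true_bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it; this satisfies epsilon-LDP for one bit."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return true_bit if random.random() < p_truth else 1 - true_bit

def estimate_frequency(reports: list[int], epsilon: float) -> float:
    """Debias the observed share of 1s to recover the true frequency."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

random.seed(0)
true_bits = [1] * 300 + [0] * 700  # true frequency of 1s: 0.30
reports = [randomized_response(b, epsilon=1.0) for b in true_bits]
print(round(estimate_frequency(reports, epsilon=1.0), 2))  # approx. 0.30
```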
AI Security & Vulnerabilities
Learning to Inject: Automated Prompt Injection via Reinforcement Learning - Presents AutoInject, a reinforcement learning framework for generating transferable prompt injection attacks against LLM agents. Highlights critical vulnerabilities in AI systems that privacy professionals must address through robust security governance and risk assessment frameworks. arXiv
Are Open-Weight LLMs Ready for Social Media Moderation? - Evaluates open-source LLMs for harmful content detection on Bluesky, comparing their zero-shot capabilities against traditional ML approaches. Relevant for organizations considering open-weight models for content moderation while maintaining GDPR compliance and avoiding automated decision-making issues. arXiv
Biometric Privacy
SIDeR: Semantic Identity Decoupling for Unrestricted Face Privacy - Proposes a framework for decoupling facial identity information from visual representations during storage and transmission. Addresses critical biometric data protection requirements under GDPR Article 9, offering technical solutions for facial recognition systems in banking and identity verification. arXiv
Quantum Computing Privacy
Q-ShiftDP: A Differentially Private Parameter-Shift Rule for Quantum Machine Learning - Introduces the first privacy mechanism specifically designed for quantum machine learning, exploiting unique properties of quantum gradient estimation. As quantum computing advances, establishes privacy-preserving foundations for future quantum AI applications requiring regulatory compliance. arXiv
AI ACT IN PILLS - Part 6
Article 11 - Technical documentation
After exploring prohibited AI practices in Part 5, we now turn to one of the most concrete obligations for AI system providers: the comprehensive technical documentation requirements outlined in Article 11. This provision establishes the foundation for demonstrating compliance with the AI Act’s requirements and serves as a critical accountability mechanism.
The Documentation Imperative
Article 11 requires providers of high-risk AI systems to draw up and maintain detailed technical documentation before placing their systems on the market or putting them into service. This documentation must be sufficiently comprehensive to enable national competent authorities to assess the system’s compliance with the regulation’s requirements.
The obligation applies to all providers of high-risk AI systems as defined in Article 6, encompassing everything from AI systems used in critical infrastructures to those deployed in education, employment, and law enforcement contexts. Importantly, this documentation must be kept up-to-date throughout the system’s lifecycle, creating an ongoing compliance obligation rather than a one-time requirement.
Essential Components of Technical Documentation
The regulation specifies that technical documentation must include a general description of the AI system, including its intended purpose, the persons or groups it is intended to serve, and the context in which it is intended to be used. This seemingly basic requirement actually demands careful consideration of use cases, target populations, and deployment scenarios.
The documentation must detail the elements of the AI system and the development process, including the methods and steps performed for development, validation, and testing. This includes information about the data governance and management practices, computational resources used, and the trade-offs made during development. For organizations, this means maintaining detailed records of technical decisions, model architectures, and development methodologies from the project’s inception.
Risk management documentation forms another crucial component. Providers must document their risk management system, including the identification and analysis of known and reasonably foreseeable risks, the estimation and evaluation of risks that may emerge during use, and the measures taken to address these risks. This creates a clear audit trail of how organizations have approached AI risk management in practice.
Data and Performance Requirements
Article 11 requires extensive documentation of training, validation, and testing datasets, including information about data governance and data management. This encompasses data collection methodologies, data quality measures, potential biases in datasets, and data preprocessing steps. For organizations working with sensitive or personal data, this documentation must also demonstrate compliance with data protection requirements.
The technical documentation must include detailed information about the AI system’s performance, including its accuracy, robustness, and cybersecurity measures. This includes performance metrics across different demographic groups and use conditions, creating accountability for fair and equitable AI system performance.
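One pragmatic way to operationalize these elements is to keep the documentation in a machine-readable skeleton that can be checked for completeness before a conformity assessment. The structure below is an illustrative assumption mapping the components discussed above; it does not reproduce the legal wording of Annex IV.

```python
# Field names below are illustrative, not the Annex IV legal wording.
technical_documentation = {
    "general_description": {
        "intended_purpose": "...",
        "intended_users": "...",
        "context_of_use": "...",
    },
    "development_process": {
        "methods_and_steps": "...",
        "model_architecture": "...",
        "computational_resources": "...",
        "design_trade_offs": "...",
    },
    "risk_management": {
        "identified_risks": [],
        "foreseeable_misuse": [],
        "mitigation_measures": [],
    },
    "data_governance": {
        "collection_methodology": "...",
        "quality_measures": "...",
        "known_biases": "...",
        "preprocessing_steps": "...",
    },
    "performance": {
        "accuracy_metrics": {},  # incl. per-demographic-group results
        "robustness_tests": {},
        "cybersecurity_measures": [],
    },
    "last_updated": "2026-02-10",  # must track the system's whole lifecycle
}

def unfilled(doc: dict, prefix: str = "") -> list[str]:
    """List fields still holding placeholder values."""
    gaps = []
    for key, value in doc.items():
        path = prefix + key
        if isinstance(value, dict) and value:
            gaps += unfilled(value, path + ".")
        elif value in ("...", [], {}):
            gaps.append(path)
    return gaps

print(unfilled(technical_documentation))  # everything is still a placeholder
```

Keeping the record in this form also makes the lifecycle-maintenance obligation auditable: each update can be versioned and diffed rather than rewritten in prose.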
Practical Implementation Challenges
For organizations, Article 11 creates significant documentation burdens that must be planned for from the beginning of AI development projects. The requirement for comprehensive documentation means that ad-hoc or informal development processes must be formalized and systematized. This is particularly challenging for organizations that have previously relied on agile development methodologies without extensive documentation practices.
The ongoing maintenance requirement means organizations must establish processes for updating documentation as systems evolve, datasets change, or new risks are identified. This creates resource implications that extend well beyond initial system development.
Compliance and Enforcement Implications
While Article 11 itself doesn’t specify direct sanctions, failure to maintain adequate technical documentation undermines an organization’s ability to demonstrate compliance with other AI Act requirements. During regulatory inspections or investigations, inadequate documentation could compound other compliance failures and potentially increase penalties.
The technical documentation also serves as the foundation for conformity assessments and CE marking requirements for high-risk AI systems, making it essential for market access in the European Union.
Technical documentation represents more than a bureaucratic requirement—it embodies the AI Act’s commitment to transparency and accountability in AI development and deployment. Organizations must view documentation not as an afterthought, but as an integral part of responsible AI development practices.
Next week in Part 7, we’ll examine Article 12 and the critical requirements for record-keeping, focusing on logging obligations and traceability requirements that complement the technical documentation framework.
Events and Meetings
Register now for our conference on cross-regulatory cooperation in the EU (17 March) (published February 3, 2026)
EDPB | Info
Safer internet day 2026 (published February 10, 2026)
European Commission | Info
Apply AI Sectoral deep dive - Mobility, transport, and automotive (published February 10, 2026)
European Commission | Info
Data Act - webinar on draft Guidelines for reasonable compensation (published February 10, 2026)
European Commission | Info
Data takes flight: Navigating privacy at the airport (published February 12, 2026)
EDPS | Info
Happy Data Protection Day 2026!
EDPS | Info
Conclusion
The regulatory landscape is witnessing a profound shift toward psychological protection in digital spaces, with enforcement actions this week revealing how privacy law is evolving to address manipulation techniques that transcend traditional data processing concerns.
The European Commission’s preliminary findings against TikTok mark a watershed moment for the Digital Services Act, establishing “addictive design” as an enforceable legal standard. The Commission’s focus on infinite scrolling, autoplay features, and reward mechanisms signals a regulatory approach that recognizes psychological harm as a legitimate policy concern. This investigation represents the first concrete attempt to operationalize the concept of digital wellbeing through binding legal obligations, moving beyond aspirational guidelines toward measurable compliance requirements.
The convergence with emerging concerns about AI companions is particularly striking. Expert warnings about emotional dependency on chatbots echo the same psychological vulnerabilities the Commission identifies in TikTok’s design patterns. Both phenomena involve platforms optimizing for user retention through mechanisms that exploit cognitive biases and emotional responses. The regulatory challenge lies in distinguishing between legitimate engagement strategies and manipulative practices that undermine user autonomy.
Simultaneously, the EDPB’s focus on Binding Corporate Rules for major multinational corporations like Heineken, ABB, and AkzoNobel reveals the increasing sophistication of cross-border data governance frameworks. These opinions arrive at a moment when international data flows face unprecedented scrutiny, particularly given ongoing EU-US negotiations over traveler data sharing with border authorities. The tension between security imperatives and privacy rights is becoming more acute as governments seek broader surveillance capabilities while civil society pushes back against facial recognition and bulk data collection.
The UK’s investigation into Grok adds another dimension to this evolving landscape. The ICO’s concern about users “losing control of personal data” when their content is scraped for AI training connects directly to fundamental questions about consent and purpose limitation in the age of large language models. This investigation may establish important precedents for how data protection authorities worldwide approach AI systems that repurpose existing content without explicit authorization.
For compliance professionals, these developments demand a fundamental reassessment of risk frameworks. Traditional privacy impact assessments focused primarily on data processing activities, but regulators are now scrutinizing the psychological effects of platform design decisions. DPOs must consider whether user interface choices, recommendation algorithms, and engagement features could constitute unfair processing under GDPR Article 5. This requires new forms of interdisciplinary expertise, combining legal analysis with insights from behavioral psychology and user experience design.
The practical implications extend beyond platform operators to any organization deploying AI systems or designing user-facing technologies. The Commission’s TikTok investigation suggests that “dark patterns” in digital design may soon face the same regulatory scrutiny as misleading advertising or unfair contract terms. Organizations must audit not just what data they collect, but how their systems influence user behavior and decision-making processes.
The NIS2 directive’s application to small and medium enterprises adds another layer of complexity, requiring cost-effective cybersecurity strategies that many organizations are unprepared to implement. The directive’s broad scope means that companies previously outside the regulatory perimeter must now develop comprehensive security governance frameworks while managing resource constraints.
These regulatory developments also highlight the growing sophistication of enforcement coordination across jurisdictions. The EDPB’s work on corporate rules and the parallel investigations by different authorities suggest a more strategic approach to international regulatory cooperation. This coordination reduces the likelihood that companies can exploit regulatory arbitrage or forum shopping to avoid compliance obligations.
However, significant questions remain about the practical boundaries of these emerging standards. How will regulators distinguish between legitimate persuasive design and prohibited manipulation? What metrics will authorities use to assess psychological harm? How will small organizations implement compliance measures developed for global platforms?
The week’s developments suggest we are entering a new phase of digital regulation where psychological protection becomes as important as data protection. This evolution challenges both industry and regulators to develop more nuanced approaches to balancing innovation with user welfare, requiring unprecedented collaboration between technologists, behavioral scientists, and legal experts.
📧 Edited by Nicola Fabiano
Lawyer - Fabiano Law Firm
🌐 Studio Legale Fabiano: https://www.fabiano.law
🌐 Blog: https://www.nicfab.eu
🌐 DAPPREMO: www.dappremo.eu
