NicFab Newsletter
Issue 11 | March 10, 2026
Privacy, Data Protection, AI, and Cybersecurity
Welcome to issue 11 of the weekly newsletter dedicated to privacy, data protection, artificial intelligence, cybersecurity and ethics. Every Tuesday you will find a curated selection of the most relevant news from the previous week, with a focus on European regulatory developments, case law, enforcement and technological innovation.
In this issue
- ITALIAN DATA PROTECTION AUTHORITY
- EDPB - EUROPEAN DATA PROTECTION BOARD
- EDPS - EUROPEAN DATA PROTECTION SUPERVISOR
- CNIL - FRENCH AUTHORITY
- EUROPEAN PARLIAMENT
- COUNCIL OF THE EUROPEAN UNION
- INTERNATIONAL DEVELOPMENTS
- ARTIFICIAL INTELLIGENCE
- CYBERSECURITY
- TECH & INNOVATION
- SCIENTIFIC RESEARCH
- AI Act in Pills
- Events and Meetings
- Conclusion
ITALIAN DATA PROTECTION AUTHORITY
Italian DPA Monitors “Forest Family” Case, Emphasizes Minor Protection
The Italian Data Protection Authority has issued a statement regarding the widely reported “forest family” case, emphasizing its commitment to monitoring situations involving minors’ data protection. The Authority expressed particular concern about cases that may expose children to significant media attention, highlighting the heightened vulnerability of minors in such circumstances.
The Garante has specifically called upon all media outlets to adhere to the principles of essential information and dignity protection for all parties involved. The Authority recommends maximum caution when disseminating information that could lead to the identification of minors, even indirectly. For DPOs working with organizations that handle minors’ data or engage in media activities, this serves as a crucial reminder of the heightened protection standards required under GDPR Article 8, particularly regarding data processing that could result in public exposure of children.
EDPB - EUROPEAN DATA PROTECTION BOARD
Stakeholder Event on Political Advertising: Agenda Available Now
The EDPB is hosting a remote stakeholder consultation on March 27, 2026, focusing on its draft guidelines for processing personal data in political advertising under the new EU regulation on transparency and targeting of political advertisements. This initiative represents a critical intersection of data protection law with democratic processes and electoral integrity.
For DPOs working with organizations involved in political campaigning, advertising technology, or social media platforms, this consultation offers valuable insight into emerging regulatory expectations. The guidelines will likely establish strict parameters for consent mechanisms, data minimization, and transparency requirements in political contexts. Organizations should monitor the outcomes closely, as political advertising regulations often serve as precedents for broader advertising and marketing practices.
The event reflects the EDPB’s commitment to stakeholder engagement outlined in the Helsinki statement, suggesting a more collaborative approach to regulatory development that DPOs should actively participate in when relevant to their sectors.
Conference on Cross-Regulatory Cooperation in the EU
The EDPB’s March 17, 2026 conference on cross-regulatory cooperation highlights the increasingly complex landscape of overlapping EU regulations affecting data protection. This high-level event will examine how frameworks like the Digital Services Act, AI Act, and sectoral regulations interact with GDPR, and how regulatory authorities coordinate enforcement efforts.
For DPOs, this conference signals the importance of adopting a holistic compliance approach rather than viewing GDPR in isolation. Understanding regulatory interplay is becoming essential as organizations face obligations under multiple frameworks simultaneously. The focus on inter-authority cooperation also suggests more coordinated enforcement actions may emerge, requiring DPOs to consider compliance risks across multiple regulatory domains.
While registration has closed, the livestream availability demonstrates the EDPB’s commitment to transparency in its strategic thinking about the evolving regulatory ecosystem.
Data Brokers Market Study
The EDPB has published a comprehensive market study on data brokers, providing a methodology for identifying these entities and analyzing their business models and associated risks. Completed through the Support Pool of Experts program at the Belgian DPA’s request, the study offers both a typology framework and a detailed analysis of the Belgian data broker landscape.
This research provides DPOs with valuable tools for conducting vendor due diligence and mapping data flows involving third-party data providers. The risk assessment framework could serve as a template for organizations evaluating partnerships with data brokers or similar entities. The study’s findings reveal a highly diverse market with varying risk levels, emphasizing the need for nuanced compliance approaches.
The methodology and typology presented could prove influential beyond Belgium, as other supervisory authorities may adopt similar frameworks for market surveillance. DPOs should review the study’s criteria to assess whether their organizations’ activities might be categorized as data brokerage under evolving regulatory interpretations.
EDPS - EUROPEAN DATA PROTECTION SUPERVISOR
TechDispatch Talks - Digital Identity Wallets
The European Data Protection Supervisor has launched a timely podcast episode exploring Digital Identity Wallets as EU Member States prepare to roll out official European Digital Identity Wallets this year. The discussion covers how these wallets could revolutionize identity verification by allowing users to store credentials like IDs, licenses, and diplomas in a single secure mobile application, sharing only necessary information for each transaction.
For DPOs, this development represents both opportunity and challenge. While Digital Identity Wallets promise to reduce data oversharing and limit profiling by implementing data minimization principles, they also introduce new privacy and security considerations. Organizations will need to understand how to integrate with these systems while maintaining GDPR compliance, particularly regarding data processing transparency and user consent mechanisms.
New Episode on Digital Identity Wallets Available
The EDPS has announced the release of its latest TechDispatch Talks episode focusing on Digital Identity Wallets and their potential to transform digital and physical identity verification processes. The episode examines whether these technological solutions can effectively reduce data oversharing and profiling while addressing inherent privacy and security risks.
This announcement signals increasing regulatory attention to digital identity infrastructure. DPOs should prepare for the implications of widespread Digital Identity Wallet adoption, including potential changes to identity verification workflows, data processing agreements, and privacy impact assessments. Understanding these systems early will be crucial for organizations planning to integrate digital identity solutions into their operations.
CNIL - FRENCH AUTHORITY
KASPR Injunction Closure Following Compliance
The CNIL closed its injunction against KASPR on March 4, 2026, following the company’s compliance efforts after a €240,000 fine in December 2024. The original violations included collecting LinkedIn user data despite privacy settings limiting profile visibility, retaining data for five years with each update, and providing inadequate transparency to users.
KASPR’s violations highlight critical compliance challenges for B2B data providers operating across platforms. The case demonstrates how social media scraping practices can violate GDPR’s lawfulness and transparency principles, particularly when users have explicitly restricted data visibility. DPOs should review third-party data acquisition practices and ensure proper legal bases exist before collecting publicly available but restricted information.
The closure suggests KASPR successfully addressed the CNIL’s concerns, though specific remediation measures weren’t detailed. This case reinforces the importance of multilingual transparency obligations and proper data subject rights implementation for companies operating internationally.
AI in Healthcare: Joint CNIL-HAS Public Consultation
The CNIL and France’s National Authority for Health (HAS) launched a joint public consultation on March 5, 2026, regarding a guidance document on AI systems in healthcare contexts. This collaborative approach signals the authorities’ recognition that AI deployment in healthcare requires both data protection and medical safety expertise working in tandem.
The consultation demonstrates the CNIL’s proactive stance on AI governance, particularly in sensitive sectors like healthcare where algorithmic decisions can significantly impact patient outcomes. DPOs in healthcare organizations should monitor this guidance development, as it will likely establish best practices for AI system implementation while ensuring GDPR compliance.
This initiative reflects the growing need for sector-specific AI guidance that balances innovation with protection of fundamental rights. Healthcare DPOs should prepare for enhanced requirements around AI transparency, algorithmic auditing, and patient consent mechanisms in AI-assisted medical decision-making.
Data Protection Framework for Public Consultation Process
The CNIL published details of its joint data processing arrangement with HAS for managing the AI healthcare consultation, demonstrating transparency in its own data handling practices. The processing covers collecting contributions, contacting participants for follow-up discussions, and generating anonymous statistics about submissions received.
This disclosure exemplifies best practices for public sector data processing, showing how Article 6(1)(e) GDPR supports legitimate public authority functions. The CNIL’s approach provides a template for organizations conducting consultations, emphasizing purpose limitation and transparency even in administrative contexts.
DPOs managing public consultation processes should note the clear articulation of processing purposes and legal bases. The framework shows how organizations can balance stakeholder engagement needs with data minimization principles while maintaining audit trails for regulatory activities.
2026 Update to Information Technology and Freedom Tables
The CNIL released its 2026 update to the “Tables Informatique et Libertés,” a comprehensive reference document indexing privacy law decisions and guidance. These tables serve as essential navigation tools for practitioners seeking relevant precedents and regulatory positions on specific data protection issues.
Regular updates to these reference materials reflect the dynamic nature of data protection law and the CNIL’s commitment to providing accessible guidance. The tables help DPOs identify applicable precedents and understand the authority’s evolving interpretation of privacy principles across different contexts and technologies.
DPOs should incorporate these updated tables into their compliance reference libraries, particularly when assessing novel processing activities or responding to complex data subject requests. The systematic indexing approach facilitates efficient research and helps ensure consistent application of established principles to new situations.
EUROPEAN PARLIAMENT
Press Conference on AI Copyright Protection
The European Parliament is advancing comprehensive copyright protection measures for the AI era, with rapporteur Axel Voss set to brief journalists following a crucial plenary vote. The proposed legislation would require AI providers to clearly acknowledge copyrighted content used in training datasets and ensure fair compensation for rights holders. Significantly, the draft empowers content creators to actively prevent their protected works from being used in AI training processes.
For DPOs, this development signals a convergence of copyright and data protection concerns in AI governance. The emphasis on EU jurisdiction for AI training using European content suggests territorial data processing principles may extend to intellectual property domains. Organizations deploying AI systems should prepare for enhanced compliance obligations around content sourcing, documentation requirements, and potentially complex licensing frameworks that bridge traditional copyright and emerging AI regulatory landscapes.
EU-UK Digital Cooperation Framework
Post-Brexit digital cooperation between the EU and UK continues through structured dialogue covering AI governance, cybersecurity, disinformation countermeasures, and platform regulation. This collaboration primarily involves information exchange, regulatory updates, and coordinated international advocacy rather than formal regulatory alignment.
The ongoing cooperation suggests pragmatic recognition that digital challenges transcend political boundaries, particularly in cybersecurity and AI governance. For DPOs managing cross-border data flows, this framework may facilitate continued regulatory dialogue and potentially smoother adequacy arrangements. However, the informal nature of current cooperation means organizations should not assume automatic regulatory convergence and must continue monitoring divergent developments in both jurisdictions’ data protection and digital regulatory approaches.
Digital Rights and Gender Equality
The European Parliament’s upcoming inter-parliamentary meeting addresses the intersection of women’s rights and digital technologies, focusing on combating online stereotypes, disinformation, and digital violence. The briefing emphasizes how AI and digital platforms can simultaneously advance gender equality through improved access and representation while exacerbating risks through algorithmic bias and targeted harassment.
This initiative highlights the growing recognition that data protection and digital rights frameworks must incorporate intersectional approaches to truly protect vulnerable groups. For DPOs, this signals potential expansion of compliance considerations beyond traditional privacy metrics to include bias assessment, harassment prevention, and inclusive design principles. Organizations should consider how their data processing activities might disproportionately impact women and marginalized groups, particularly in AI-driven decision-making systems.
COUNCIL OF THE EUROPEAN UNION
Council Agrees Position to Streamline Rules on Biocides
The Council of the European Union has adopted a position to simplify the regulatory framework governing biocides across member states. The key change involves extending the data protection period for companies that submit scientific studies and research data when seeking approval for biocidal products. This streamlining effort aims to reduce administrative burden while maintaining safety standards for products used to control harmful organisms.
For Data Protection Officers, this development signals potential changes in how personal data collected during biocide research and approval processes will be handled. While the focus appears to be on extending protection for proprietary research data rather than personal data per se, DPOs should monitor the final legislation to understand any implications for data retention periods and processing activities within organizations involved in biocide development, testing, or regulatory compliance.
The Council’s position represents an important step toward harmonizing biocide regulations across the EU, though the final rules will depend on ongoing negotiations with the European Parliament.
INTERNATIONAL DEVELOPMENTS
Germany’s Privacy Watchdog Sidelined as Intelligence Powers Expand
Germany’s data protection authority faces a significant setback as courts deny its oversight of intelligence activities, just as the country prepares to grant sweeping new powers to its spy agencies. The Federal Commissioner for Data Protection warned that “citizens have virtually no means of defending themselves against intelligence measures that can deeply intrude on their privacy” after losing a legal challenge against the BND intelligence service.
This development represents a historic shift for Germany, breaking from decades of strict intelligence constraints rooted in post-WWII protections against surveillance abuse. With the U.S. potentially reducing intelligence sharing under Trump, European nations are strengthening domestic capabilities, but Germany’s move creates concerning oversight gaps.
For DPOs, this signals a troubling trend where national security interests override privacy protections, potentially setting precedent for reduced regulatory authority over intelligence data processing across Europe.
FTC Contradicts Itself on Age Verification and Children’s Privacy
The U.S. Federal Trade Commission has acknowledged that age verification systems violate children’s privacy laws while simultaneously choosing to ignore this contradiction. This paradoxical stance highlights the impossible position regulators face when trying to balance child safety legislation with fundamental privacy protections.
Age verification mechanisms inherently require extensive data collection and processing, creating the very privacy risks that children’s protection laws aim to prevent. The FTC’s admission reveals the regulatory incoherence plaguing child safety initiatives, where protecting children from content exposure potentially exposes them to greater privacy harms from data harvesting.
DPOs should prepare for similar regulatory contradictions in their jurisdictions as lawmakers push child safety measures without considering privacy implications, requiring careful navigation between competing compliance obligations.
UK ICO Investigates Meta’s Smart Glasses After Privacy Breach Reports
The UK’s Information Commissioner’s Office is contacting Meta following revelations that human contractors reviewing AI training data from Ray-Ban smart glasses accessed highly intimate footage captured by unsuspecting users. Workers in Kenya reportedly viewed everything from private conversations to people dressing and using toilets, with one contractor stating “we see everything.”
This investigation raises critical questions about cross-border data transfers under GDPR, particularly whether Meta’s safeguards for processing EU citizens’ data in Kenya meet regulatory requirements. The incident demonstrates how AI training practices can create unexpected privacy violations when wearable devices capture far more than users intend to share.
DPOs should audit their AI training processes and contractor oversight mechanisms, ensuring clear boundaries around human review of user-generated content and robust data minimization practices for wearable device data.
UK Warns of Heightened Iranian Cyberattack Risks
The UK’s National Cyber Security Centre has alerted British organizations to increased Iranian cyberattack risks amid escalating Middle East tensions. While the direct threat to UK entities remains unchanged, organizations with Middle Eastern presence or supply chains face elevated risks, particularly as Iranian state-sponsored groups maintain attack capabilities despite domestic internet blackouts.
The NCSC emphasizes that geopolitical conflicts can rapidly shift cyber threat landscapes, requiring constant vigilance from organizations with international exposure. This warning follows similar alerts from U.S. agencies about Iranian targeting of critical infrastructure.
DPOs at organizations with Middle Eastern connections should review incident response procedures, enhance monitoring of external attack surfaces, and ensure data breach notification processes are ready for potential state-sponsored attacks that could compromise personal data across international operations.
ARTIFICIAL INTELLIGENCE
Red Lines under the EU AI Act: Unpacking Social Scoring as a Prohibited AI Practice
The EU AI Act’s Article 5 establishes clear prohibitions on AI-enabled social scoring systems that assess individuals based on social behavior or personal traits across multiple contexts. This prohibition applies broadly to both public and private sectors, creating significant compliance obligations for organizations deploying AI systems that could potentially fall under this definition.
For DPOs, this development reinforces existing GDPR obligations around profiling and automated decision-making while introducing new specific constraints. The regulation targets practices that lead to unfair treatment, particularly when assessment data spans unrelated social contexts or results in disproportionate responses to the behavior evaluated.
Importantly, legitimate business practices like creditworthiness assessments, insurance risk scoring, and fraud detection remain permissible, provided they comply with proportionality requirements and relevant legislation. DPOs should conduct thorough assessments of existing AI systems to ensure compliance with both the AI Act’s social scoring prohibitions and underlying GDPR requirements.
Show HN: Open-Source Article 12 Logging Infrastructure for the EU AI Act
A new open-source TypeScript library addresses the EU AI Act’s Article 12 requirements, which mandate automatic event recording and six-month retention for high-risk AI systems starting August 2026. The library creates append-only logs with hash-chaining for tamper detection, addressing the gap between standard application logging and regulatory requirements for reconstructing AI decision-making processes.
This development highlights a critical compliance challenge for DPOs: demonstrating that AI decision logs haven’t been altered months after the fact. The regulatory requirement appears to demand ledger-like integrity rather than conventional logging, creating new technical and procedural obligations for organizations deploying high-risk AI systems.
While the current solution offers tamper-evidence rather than tamper-proofing, it represents a practical approach to Article 12 compliance. DPOs should evaluate whether existing logging infrastructure meets these emerging audit trail requirements and consider implementing similar immutable logging mechanisms before the August deadline.
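The hash-chaining idea behind such tamper-evident logs can be sketched in a few lines. This is a minimal illustration of the general technique, not the library’s actual API; the function names and record fields here are hypothetical:

```python
import hashlib
import json

GENESIS_HASH = "0" * 64  # placeholder hash for the first record

def append_record(log: list[dict], event: dict) -> None:
    """Append an event, chaining it to the previous record's hash.

    Each entry embeds the previous entry's hash, so altering any past
    record invalidates every hash that follows it: tamper evidence,
    not tamper proofing.
    """
    prev_hash = log[-1]["hash"] if log else GENESIS_HASH
    body = {"event": event, "prev_hash": prev_hash}
    # Canonical JSON (sorted keys) so the digest is deterministic.
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every digest; any edit to a past record breaks the chain."""
    prev_hash = GENESIS_HASH
    for entry in log:
        body = {"event": entry["event"], "prev_hash": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = digest
    return True
```

In use, `verify_chain` passes on an untouched log and fails as soon as any earlier record is edited, which is the ledger-like integrity property auditors would look for when reconstructing an AI system’s decisions months later.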
CYBERSECURITY
Recent Cisco Catalyst SD-WAN Vulnerability Now Widely Exploited
The cybersecurity landscape continues to face significant challenges as CVE-2026-20127, a critical vulnerability in Cisco Catalyst SD-WAN systems, has moved from discovery to active widespread exploitation. WatchTowr reports observing exploitation attempts from numerous unique IP addresses, indicating coordinated efforts by threat actors to compromise vulnerable systems.
This development underscores the critical importance of rapid patch deployment in enterprise environments. For DPOs, this represents a potential data breach scenario where network infrastructure compromises could lead to unauthorized access to personal data flowing through these systems. Organizations utilizing Cisco SD-WAN solutions should prioritize immediate patching and conduct thorough security assessments to ensure no unauthorized access has occurred.
Hikvision and Rockwell Automation CVSS 9.8 Flaws Added to CISA KEV Catalog
CISA has added two critical vulnerabilities to its Known Exploited Vulnerabilities catalog, both carrying near-maximum CVSS scores of 9.8. CVE-2017-7921 affects Hikvision cameras with improper authentication mechanisms, while CVE-2021-22681 impacts Rockwell Automation industrial control systems with credential protection failures.
The inclusion of these vulnerabilities in the KEV catalog indicates active exploitation in the wild, particularly concerning for organizations operating critical infrastructure. DPOs should be especially vigilant about the Hikvision vulnerability, as security cameras often process personal data including biometric information and location data. Federal agencies have until March 26, 2026, to implement fixes, but all organizations should prioritize remediation regardless of sector.
Cognizant TriZetto Breach Exposes Health Data of 3.4 Million Patients
TriZetto Provider Solutions has disclosed a significant data breach affecting over 3.4 million individuals, with unauthorized access occurring for nearly a year before detection. The breach exposed sensitive health information including names, addresses, Social Security numbers, and insurance details through compromised insurance eligibility verification systems.
This incident highlights critical gaps in breach detection capabilities and notification timelines. The delay between initial compromise (November 2024) and public notification (February 2026) raises serious GDPR compliance concerns for any EU data subjects affected. DPOs should evaluate their own healthcare data processing arrangements and ensure robust monitoring systems are in place to detect unauthorized access promptly, as delayed detection significantly amplifies regulatory and reputational risks.
FBI Investigating ‘Suspicious’ Cyber Activity on Surveillance System
The FBI has confirmed it is investigating suspicious cyber activity on a system containing sensitive surveillance information, according to congressional notifications. While details remain limited, the incident involves systems that likely process significant amounts of personal data collected through federal surveillance activities.
This development raises important questions about the security of government surveillance infrastructure and the protection of citizen data held by law enforcement agencies. For DPOs, this incident serves as a reminder that even the most security-conscious organizations can face sophisticated threats. It emphasizes the need for robust security measures when processing personal data, particularly in arrangements involving law enforcement or national security contexts where data sensitivity is exceptionally high.
Cisco Confirms Active Exploitation of Two Catalyst SD-WAN Manager Vulnerabilities
Cisco has confirmed active exploitation of two additional vulnerabilities in its Catalyst SD-WAN Manager platform: CVE-2026-20122 (arbitrary file overwrite) and CVE-2026-20128 (information disclosure). WatchTowr reports observing threat actors deploying web shells, with the highest activity spike occurring on March 4th across global regions.
The combination of arbitrary file overwrite and information disclosure capabilities creates a particularly dangerous attack vector for threat actors seeking persistent access to network infrastructure. Organizations should assume any exposed systems are compromised until proven otherwise. DPOs must ensure incident response plans account for infrastructure-level breaches that could affect all data processing activities, requiring comprehensive impact assessments and potentially triggering breach notification obligations across multiple jurisdictions.
TECH & INNOVATION
TikTok Battles European Regulators Over China Data Flows
TikTok has launched a major court challenge in Dublin against Ireland’s Data Protection Commission, fighting a €530 million fine and demands to cut off data flows to China. The case centers on whether the Chinese-owned platform can continue transferring European users’ personal data to its parent company ByteDance, given Beijing’s surveillance laws.
The Irish regulator found that TikTok failed to adequately protect Europeans’ data from potential Chinese government access. If TikTok loses, it may need to completely sever its China operations for European users, at an estimated cost of €5 billion. This landmark case will test GDPR’s effectiveness against authoritarian data access laws and could set precedent for other Chinese-owned platforms operating in Europe.
For DPOs, this highlights the critical importance of robust data transfer impact assessments when dealing with third countries lacking adequate data protection frameworks.
Meta Faces Privacy Storm Over Smart Glasses Data Review
A Swedish investigation revealed that workers at Meta’s Kenyan subcontractor Sama have been reviewing footage from Ray-Ban Meta smart glasses, including intimate content such as nudity and bathroom usage. Despite Meta’s claims of face-blurring protections, sources disputed the consistency of these privacy safeguards.
The revelations have triggered regulatory scrutiny from the UK’s Information Commissioner’s Office and a class action lawsuit in the US. While Meta confirmed it uses contractors to review content shared with Meta AI to improve services, users appear largely unaware their intimate moments could be viewed by overseas workers. The privacy policy mentions cloud processing but may not adequately convey the human review element.
This case underscores the need for transparent consent mechanisms and clear data processing disclosures, particularly for wearable devices capturing sensitive personal moments.
Legal Action Escalates Against Meta’s Smart Glasses Privacy Practices
Following the Swedish investigation, Meta now faces a US class action lawsuit alleging false advertising and privacy violations over its Ray-Ban smart glasses. Plaintiffs argue that Meta’s marketing promises of privacy-focused design misled consumers about human review of their footage, including intimate content.
The lawsuit, filed by Clarkson Law Firm representing New Jersey and California residents, highlights the scale of the issue with over seven million glasses sold in 2025. Users cannot opt out of the data pipeline that feeds their footage to overseas reviewers. While Meta points to its privacy policy and terms of service, critics argue these disclosures are inadequately prominent given the sensitive nature of wearable camera data.
This case emphasizes the critical importance of clear, prominent privacy notices and meaningful consent for invasive data processing activities.
Italian Journalist Hack Confirmed Using Paragon Spyware
Italian prosecutors confirmed that journalist Francesco Cancellato and two activists were successfully hacked with Paragon spyware in coordinated attacks on December 14, 2024. While intelligence agency AISI lawfully targeted the activists, no evidence links the agency to Cancellato’s hack, raising questions about unauthorized surveillance.
The case emerged after WhatsApp alerted around 90 individuals, including journalists and civil society members, of targeting by Paragon Solutions’ spyware. Technical analysis revealed successful infections, marking the first independent confirmation of the journalist’s compromise. The investigation continues to identify Cancellato’s attackers, with Italy’s far-right government denying involvement but offering limited cooperation.
This incident highlights the growing threat of commercial spyware against journalists and civil society, emphasizing the need for robust cybersecurity measures and regulatory oversight of surveillance technologies.
Data Brokers Harvest AI Chatbot Conversations Through Browser Extensions
Browser extensions disguised as VPN services or ad blockers are secretly intercepting users’ AI chatbot conversations and selling the data to brokers. Research revealed that these extensions override browser functions to capture ChatGPT, Gemini, and other AI service interactions, including highly sensitive personal information.
Despite claims of anonymization, conversations contain real names, medical details, and diagnosis codes. One researcher accessed nearly 500 unique prompts from over 435 users across sensitive categories including depression, substance abuse, and financial information. The data is stored verbatim and made searchable through APIs sold to customers.
This practice exploits users’ trust in “free” browser extensions while potentially violating privacy laws. DPOs should audit browser extensions in organizational environments and educate users about the privacy risks of seemingly benign browser tools when handling sensitive information.
SCIENTIFIC RESEARCH
Selection of the most relevant papers of the week from arXiv on AI, Machine Learning and Privacy
Privacy-Preserving AI Systems
A Late-Fusion Multimodal AI Framework for Privacy-Preserving Deduplication in National Healthcare Data Environments
This framework addresses duplicate record challenges in healthcare environments while maintaining GDPR and HIPAA compliance. The approach eliminates reliance on direct identifiers like names or SSNs, offering a privacy-by-design solution for data quality management. Critical for healthcare DPOs managing patient data deduplication without compromising regulatory compliance.
arXiv
Privacy-Aware Camera 2.0 Technical Report
Introduces advanced privacy-preserving surveillance technology for sensitive environments, addressing the privacy-security paradox in visual monitoring systems. The solution provides mathematically provable irreversibility while maintaining semantic understanding, surpassing traditional obfuscation methods. Essential for compliance officers evaluating surveillance technologies in privacy-sensitive contexts.
arXiv
AI Safety and Alignment
Why Is RLHF Alignment Shallow? A Gradient Analysis
Provides mathematical analysis of why safety alignment in LLMs remains superficial, demonstrating that gradient-based alignment concentrates only on positions where harm is determined. The research reveals fundamental limitations in current alignment approaches, offering insights for developing more robust safety measures in AI systems.
arXiv
ThaiSafetyBench: Assessing Language Model Safety in Thai Cultural Contexts
Introduces culturally-specific safety evaluation for LLMs beyond English-centric assessments. The benchmark comprises 1,954 Thai malicious prompts addressing cultural and linguistic risks often overlooked in global AI safety frameworks. Valuable for organizations deploying AI systems across diverse cultural contexts requiring localized compliance strategies.
arXiv
AI Governance and Accountability
Code Fingerprints: Disentangled Attribution of LLM-Generated Code
Addresses software governance challenges by enabling identification and attribution of LLM-generated code for vulnerability management and incident investigation. The technique supports compliance frameworks requiring traceability and accountability in automated code generation environments. Crucial for organizations managing AI-generated intellectual property and security risks.
arXiv
Jailbreak Foundry: From Papers to Runnable Attacks for Reproducible Benchmarking
Provides systematic framework for translating jailbreak research into executable security assessments through multi-agent workflows. The platform addresses benchmark drift issues, enabling consistent robustness evaluation across different AI systems. Essential tool for security teams conducting comprehensive AI vulnerability assessments.
arXiv
AI Trustworthiness and Detection
Detecting Hallucinations in Authentic LLM-Human Interactions
Focuses on hallucination detection in real-world LLM deployments, moving beyond artificial benchmarks to genuine human-AI dialogues. Critical for sensitive domains like healthcare and legal services where accuracy is paramount. The research provides practical frameworks for implementing hallucination monitoring in production environments.
arXiv
Censored LLMs as a Natural Testbed for Secret Knowledge Elicitation
Explores methods for detecting dishonesty and eliciting truthful responses from language models using naturally occurring scenarios rather than artificial constructions. The research provides insights into model transparency and reliability assessment, supporting due diligence processes for AI system deployment in regulated environments.
arXiv
AI ACT IN PILLS - Part 11
Article 15 - Accuracy, robustness and cybersecurity
After examining Article 14 and the crucial requirement for human oversight in the last episode, we continue our journey through the AI Act by analyzing Article 15, which sets out the fundamental requirements to ensure that high-risk AI systems meet rigorous technical standards in terms of accuracy, robustness, and cybersecurity.
Core Requirements for System Performance
Article 15 mandates that high-risk AI systems must achieve “an appropriate level of accuracy, robustness and cybersecurity” throughout their lifecycle. This seemingly straightforward requirement carries profound implications for how AI systems are designed, developed, and maintained. The regulation deliberately avoids setting universal numerical thresholds, recognizing that accuracy requirements vary significantly across applications – a medical diagnostic system demands different precision levels than a content recommendation engine.
Accuracy requirements extend beyond initial deployment metrics. Providers must ensure their systems maintain performance standards over time, accounting for data drift, changing operational conditions, and evolving user behaviors. This creates ongoing obligations rather than one-time compliance checks. For instance, a hiring algorithm must demonstrate consistent fair performance across different candidate pools and maintain this accuracy as job markets and recruitment practices evolve.
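One common way to operationalize the data-drift monitoring described above is the Population Stability Index (PSI), which compares a feature’s distribution at validation time against its live distribution. A minimal sketch, with synthetic data; the conventional 0.1/0.25 thresholds are industry rules of thumb, not values prescribed by the AI Act:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference sample and a live one.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # guard empty bins against log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # feature distribution at validation
drifted = rng.normal(1.0, 1.0, 10_000)   # same feature after a shift in production
print(f"self  PSI: {psi(baseline, baseline[:5000]):.4f}")  # near zero
print(f"drift PSI: {psi(baseline, drifted):.4f}")          # well above 0.25
```

Computed per feature on a schedule, a metric like this turns the Act’s “ongoing obligation” into a concrete, auditable control with documented thresholds.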
Robustness as a Fundamental Design Principle
The robustness requirement addresses system resilience against various challenges, including input variations, unexpected scenarios, and adversarial attacks. AI systems must perform reliably even when encountering data that differs from their training sets or when facing deliberate attempts to manipulate their outputs.
Consider an autonomous vehicle navigation system: robustness means maintaining safe operation during unusual weather conditions, unexpected road obstacles, or when sensors provide incomplete information. Similarly, a financial fraud detection system must remain effective against new fraud techniques while avoiding excessive false positives that could disrupt legitimate transactions.
This requirement forces organizations to implement comprehensive testing regimes, including stress testing, adversarial testing, and scenario analysis that goes well beyond standard software validation procedures.
Cybersecurity Integration from Design Phase
Article 15’s cybersecurity mandate requires protection against digital threats throughout the AI system’s lifecycle. This encompasses traditional cybersecurity concerns like unauthorized access and data breaches, but extends to AI-specific vulnerabilities such as model extraction, membership inference attacks, and poisoning of training data.
Organizations must implement security measures appropriate to the system’s risk level and operational context. A healthcare AI system processing patient data requires more stringent protections than a system optimizing energy consumption in buildings. The regulation emphasizes “security by design,” meaning cybersecurity considerations must be integrated from initial system conception rather than added as an afterthought.
Practical Implementation Challenges
These technical standards create significant compliance obligations for AI providers. Organizations must establish robust quality management systems capable of continuously monitoring and validating system performance. This includes implementing automated monitoring tools, establishing clear performance benchmarks, and creating rapid response procedures when systems fall below acceptable standards.
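The “clear performance benchmarks and rapid response procedures” mentioned above can be as simple as a rolling-window accuracy check that fires an alert when performance falls below a documented threshold. A minimal sketch; the 0.90 threshold, window size, and warm-up count are illustrative choices, not values prescribed by the Act.

```python
from collections import deque

class PerformanceMonitor:
    """Rolling-window accuracy check against a documented benchmark."""

    def __init__(self, threshold=0.90, window=500, min_n=100):
        self.threshold = threshold
        self.min_n = min_n                      # warm-up before alerting
        self.outcomes = deque(maxlen=window)    # recent hit/miss flags

    def record(self, prediction, ground_truth) -> bool:
        """Log one labelled outcome; return True if an alert should fire."""
        self.outcomes.append(prediction == ground_truth)
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return len(self.outcomes) >= self.min_n and accuracy < self.threshold

# Simulated feed: 200 correct outcomes, then the model starts failing
monitor = PerformanceMonitor()
alerts = [monitor.record(1, 1) for _ in range(200)]
alerts += [monitor.record(1, 0) for _ in range(50)]
print("alert fired:", any(alerts))
```

The alert would then trigger whatever escalation path the quality management system defines (human review, rollback, or suspension), and both the threshold and each breach belong in the compliance documentation discussed next.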
Documentation requirements are equally demanding. Providers must maintain detailed records of testing procedures, performance metrics, and remediation actions. This documentation serves both internal quality assurance purposes and regulatory compliance needs, particularly during market surveillance activities by competent authorities.
Enforcement and Accountability
Non-compliance with Article 15’s technical standards can trigger the AI Act’s penalty framework, with fines of up to €15 million or 3% of total worldwide annual turnover, whichever is higher. However, the more immediate concern for most organizations is the potential for enforcement actions that could suspend system operations or require market withdrawal until compliance is achieved.
The regulation’s approach creates shared responsibility between providers and deployers. While providers bear primary responsibility for ensuring technical compliance, deployers must implement appropriate monitoring and maintenance procedures to preserve system performance in operational environments.
Looking Forward
Article 15’s technical requirements establish the foundation for trustworthy AI deployment, but implementation success depends heavily on how providers structure their operational responsibilities. Our next installment will examine Article 16, which details the primary obligations of high-risk AI system providers, exploring how organizations must organize their governance, quality management, and accountability structures to ensure comprehensive compliance with the AI Act’s demanding framework.
Events and Meetings
Conference on cross-regulatory cooperation in the EU (17 March) - Programme available now (published March 3, 2026)
EDPB | Info
Stakeholder event on political advertising: agenda available now (published March 6, 2026)
EDPB | Info
117th Plenary meeting (published March 18, 2026)
EDPB | Info
Apply AI webinars sectoral deep dive - Agrifood, climate & environment (published March 19, 2026)
European Commission | Info
AI for 3D Digital Twins in Cultural Heritage: Stakeholder Forum (published March 23, 2026)
European Commission | Info
Blog post: Advancing into Practice: Third Meeting of the AI Act Correspondents Network
EDPS | Info
Happy Data Protection Day 2026!
EDPS | Info
Commission holds first meeting of Special Panel on child safety online
European Commission | Info
European Union endorses Leaders’ Declaration at AI Summit in India
European Commission | Info
Executive Vice-President Virkkunen in India for summit on artificial intelligence
European Commission | Info
Conclusion
Regulatory authorities across Europe are demonstrating a notable shift from reactive enforcement to proactive sector-specific guidance, signaling a maturation in how data protection intersects with emerging technologies. This evolution becomes particularly evident when examining the coordinated approaches emerging around AI governance, digital identity frameworks, and cross-border enforcement mechanisms.
The Italian Data Protection Authority’s intervention regarding media coverage of vulnerable families represents more than routine child protection enforcement. It exemplifies how regulators are expanding their remit beyond traditional data processing violations to encompass broader societal harms amplified by digital media ecosystems. This proactive stance mirrors similar developments we observed in previous weeks with the Amazon worker surveillance case, suggesting a pattern where authorities are increasingly willing to tackle systemic privacy violations that extend beyond individual consent frameworks.
CNIL’s closure of enforcement proceedings against KASPR offers instructive parallels to this trend. Rather than pursuing prolonged adversarial proceedings, French authorities appear to favor collaborative resolution mechanisms that achieve substantive compliance outcomes. This pragmatic approach becomes particularly relevant when contrasted with the joint CNIL-HAS consultation on AI in healthcare applications. The collaboration between data protection and health authorities represents a sophisticated recognition that AI governance cannot be effectively managed through siloed regulatory approaches.
The EDPB’s concurrent focus on political advertising oversight and data broker market studies reveals another dimension of this coordination challenge. Political advertising regulations intersect with fundamental rights protections, commercial data practices, and democratic governance principles simultaneously. The timing of these initiatives alongside the European Parliament’s copyright protection discussions suggests regulatory bodies are grappling with how to prevent AI governance from fracturing across multiple, potentially conflicting frameworks.
Digital identity wallets present perhaps the most complex coordination challenge highlighted this week. The EDPS’s dedicated focus on this technology through multiple communication channels indicates recognition that digital identity systems will fundamentally reshape how privacy rights are exercised across both public and private sector interactions. The implications extend far beyond technical implementation to questions of citizen autonomy, state surveillance capabilities, and commercial data exploitation.
The international dimension adds further complexity, particularly regarding the paradox of age verification systems. The FTC’s acknowledgment that age verification mechanisms inherently violate children’s privacy while simultaneously being positioned as child protection tools exposes a fundamental tension in regulatory approaches. This contradiction becomes more acute when considering Meta’s smart glasses privacy violations, where the distinction between public and private spaces becomes increasingly meaningless in augmented reality contexts.
From a practical perspective, these developments create significant compliance challenges for multinational organizations. The emergence of sector-specific AI guidance means that healthcare providers, financial institutions, and media companies must now navigate overlapping regulatory frameworks that may impose conflicting requirements. DPOs must develop expertise not only in data protection principles but also in sector-specific regulations, copyright law, and emerging AI governance standards.
The cybersecurity incidents affecting Cisco, Hikvision, and Cognizant’s healthcare systems underscore how technical vulnerabilities can instantly transform routine data processing activities into major privacy violations affecting millions of individuals. The scale of the Cognizant breach impacting 3.4 million patients demonstrates how interconnected digital healthcare systems amplify both efficiency gains and privacy risks simultaneously.
Looking forward, the planned cross-regulatory cooperation conference signals recognition that current fragmented approaches may prove inadequate for governing increasingly complex digital ecosystems. However, this coordination effort faces the challenge of reconciling fundamentally different regulatory philosophies across privacy, competition, intellectual property, and sectoral domains.
The most pressing question emerging from these developments concerns whether European regulatory coordination can maintain pace with technological innovation while preserving fundamental rights protections. TikTok’s legal battle to maintain Chinese ownership ties illustrates how geopolitical considerations increasingly complicate technical regulatory solutions. As digital identity systems, AI applications, and cross-border data flows become more pervasive, the window for establishing coherent governance frameworks may be narrowing rapidly.
Will the current trend toward collaborative, sector-specific guidance prove sufficient to address the systemic challenges posed by AI-driven digital ecosystems, or are we witnessing the early stages of regulatory fragmentation that could undermine effective privacy protection?
📧 Edited by Nicola Fabiano
Lawyer - Fabiano Law Firm
🌐 Studio Legale Fabiano: https://www.fabiano.law
🌐 Blog: https://www.nicfab.eu
🌐 DAPPREMO: www.dappremo.eu
