NicFab Newsletter
Issue 12 | March 17, 2026
Privacy, Data Protection, AI, and Cybersecurity
Welcome to Issue 12 of our weekly newsletter dedicated to privacy, data protection, artificial intelligence, cybersecurity, and ethics. Every Tuesday, you’ll find a curated selection of the most relevant news from the previous week, with a focus on European regulatory developments, case law, enforcement, and technological innovation.
In this issue
- ITALIAN DATA PROTECTION AUTHORITY
- EDPB - EUROPEAN DATA PROTECTION BOARD
- EDPS - EUROPEAN DATA PROTECTION SUPERVISOR
- EUROPEAN COMMISSION
- CNIL - FRENCH DATA PROTECTION AUTHORITY
- EUROPEAN PARLIAMENT
- DIGITAL MARKETS & PLATFORM REGULATION
- INTERNATIONAL DEVELOPMENTS
- ARTIFICIAL INTELLIGENCE
- CYBERSECURITY
- TECH & INNOVATION
- SCIENTIFIC RESEARCH
- Column – AI Act in a Nutshell | Part 12 – Article 16
- From the NicFab Blog
- Featured Events and Meetings
- Conclusion
ITALIAN DATA PROTECTION AUTHORITY
Data Protection Authority fines Intesa Sanpaolo €17.6 million
The Data Protection Authority imposed a record fine of €17.6 million on Intesa Sanpaolo for unilaterally transferring 2.4 million customers to its subsidiary Isybank through unlawful profiling. The bank selected customers under 65 who used digital channels, carrying out this profiling without an adequate legal basis.
The operation involved significant contractual changes (a new IBAN and access only via the app) that were inadequately communicated during the summer. For DPOs, the case highlights the importance of carefully assessing the legal basis for profiling operations and ensuring transparent communications regarding extraordinary processing activities that customers could not reasonably foresee.
Acea Energia fined €2 million: fraudulent door-to-door contracts
The Data Protection Authority imposed a heavy fine on Acea Energia for serious violations in the management of over 1,200 contracts activated without customers’ knowledge. The investigation revealed a flawed customer acquisition system in which door-to-door agents fraudulently obtained documents and activated service contracts with forged signatures.
The main issue concerns inadequate oversight of business partners: Acea failed to implement sufficient controls to prevent the misuse of documents obtained by agents. The recall system, designed to verify contractual intent, proved ineffective as well. The Data Protection Authority imposed specific corrective measures: monitoring alerts for agents, periodic data verifications, and the definition of retention periods. For DPOs, this case highlights the importance of establishing rigorous due diligence protocols for third parties and continuous monitoring systems when delegating data processing activities.
Digital cemeteries: sanctions for Aldilapp and involved municipalities
The Data Protection Authority has sanctioned the company Stup (Aldilapp’s distributor), the municipalities of Ancona and Velletri, and their respective cemetery operators for issues with the app that creates “digital profiles” of the deceased. The platform used municipal databases to automatically generate profiles with social and commercial features, blurring the lines between institutional services and private activities.
The violations concern excessive data processing relative to institutional purposes and a lack of transparency toward users, which could lead them to confuse public services with commercial ones. The Data Protection Authority ordered a clear separation between the different types of services and the deletion of the automatically created profiles. The fines imposed were modest: €6,000 to Stup, €3,000 to the Municipality of Ancona, €2,000 to the Municipality of Velletri, and €2,500 to the cemetery operators, taking into account the innovative nature of the service and the cooperation provided during the investigation. For DPOs in the public sector, the case underscores the importance of strictly limiting data use to institutional purposes.
EDPB - EUROPEAN DATA PROTECTION BOARD
EDPB and EDPS Support Harmonization of Clinical Trials in the European Biotech Act
The EDPB and the EDPS have adopted a joint opinion on the proposed European Biotech Act, supporting the goal of strengthening the European biotechnology sector by harmonizing the regulatory framework for clinical trials. The authorities welcome the introduction of a single legal basis for the processing of personal data by sponsors and investigators.
However, they highlight the need for specific safeguards for health and genetic data. Key recommendations include: clarification of data controllers’ roles, limiting the mandatory 25-year retention period to the master file only, precise definition of purposes for further processing, and consistency with the AI Act. For DPOs working in the healthcare sector, this development will require particular attention in the implementation of the new harmonized rules.
EDPB Letter to the European Commission on Data Transfers to the United States
The EDPB has sent a letter to the European Commission regarding the privacy implications of recent proposed legislative changes to entry requirements for citizens of the European Economic Area into the United States. The communication highlights specific concerns regarding international transfers of personal data.
This EDPB initiative underscores the importance of continuously monitoring the conditions for transatlantic data transfers, an issue of particular relevance following the adoption of the Data Privacy Framework. DPOs should pay close attention to developments in this matter, which could affect adequacy assessments and the supplementary measures required for transfers to the U.S.
EDPS - EUROPEAN DATA PROTECTION SUPERVISOR
Joint Opinion EDPB-EDPS 3/2026 on the European Biotech Act
The official document of the EDPB-EDPS Joint Opinion 3/2026 regarding the proposed European Biotech Act has been published. The text provides a detailed analysis of data protection issues in the context of clinical trials and biotechnology research, with specific recommendations on data retention, the roles of data controllers, and consistency with the AI Act.
For DPOs working in the healthcare and biotechnology sectors, this document is an essential resource for preparing for upcoming regulatory changes and assessing their impact on existing business processes. The joint position underscores the importance of balancing scientific innovation with the protection of fundamental rights.
EUROPEAN COMMISSION
TraceMap: New AI Platform for Food Safety
The European Commission has launched TraceMap, an AI-based platform designed to accelerate the detection of food fraud, contamination, and foodborne disease outbreaks in the EU. This tool represents a significant step forward in the innovation of European food safety controls, integrating advanced technologies to support national authorities.
For Data Protection Officers, TraceMap raises questions about data processing: the platform primarily processes supply chain and commercial data, but may, in some cases, involve operators’ personal data. GDPR compliance will be essential, especially regarding data-sharing mechanisms between Member States and the clear definition of legal bases for processing. It will be crucial to monitor how the Commission implements privacy safeguards, particularly given the system’s cross-border nature. The initiative highlights the EU’s growing trend toward AI solutions for public safety, requiring a careful balance between operational effectiveness and the protection of fundamental rights.
CNIL - FRENCH DATA PROTECTION AUTHORITY
Web Filtering Proxy Servers: CNIL Recommendations
The CNIL has published new recommendations for web filtering proxies, tools increasingly used for corporate cybersecurity. Following a public consultation in 2025, the authority provides precise guidance on GDPR compliance, legal bases, data minimization, and retention periods. Particular attention is given to HTTPS decryption and deployment methods.
For DPOs, this guide provides practical guidance on implementing compliant security solutions that balance cybersecurity and privacy. The recommendation is aimed at both data controllers and solution providers.
CNIL at the InCyber 2026 Forum: Cybersecurity and GDPR
From March 31 to April 2, the CNIL will be present at the InCyber Forum in Lille, with President Marie-Laure Denis speaking at the opening summit. The authority confirms cybersecurity as a strategic priority for 2025–2028, highlighting a 10% increase in reported breaches (6,167 in 2025).
The GDPR is presented as the best tool for cyber prevention, with a focus on privacy by design, DPIA, and breach notifications. For DPOs, the event offers personalized compliance consulting and practical tools to integrate security and data protection.
CNIL and HAS Strengthen Best Practices in Digital Healthcare
The CNIL and the Haute Autorité de santé have signed a partnership agreement to support the digital transformation of the healthcare sector, with a particular focus on artificial intelligence. The agreement aims to promote the protection of personal data and fundamental rights in the use of digital tools in the healthcare, social, and medico-social sectors.
For DPOs in the healthcare sector, this collaboration represents a significant opportunity: a joint recommendation on the proper use of AI in clinical settings is expected in the second quarter of 2026. The partnership will facilitate the adoption of these recommendations in clinical practice by providing common positions from the two leading institutions.
Agenda for the March 12 Plenary Session
The CNIL has published the agenda for the March 12, 2026, plenary session, including draft resolutions on national credit files, recommendations on tracking pixels in emails, and authorizations for the European Medicines Agency under the DARWIN EU project.
The hearing on tracking pixels is a crucial issue for email marketing and corporate communications. DPOs should monitor developments regarding this recommendation to update internal policies and privacy notices related to electronic communications.
FAQ on Data Altruistic Organizations in the EU
The CNIL has published an FAQ on the Data Altruistic Organizations (DAO) regime under the Data Governance Regulation. These organizations voluntarily share data for nonprofit objectives of general interest and may use a European logo to identify themselves.
The framework is not mandatory but provides a structured approach to data altruism. For DPOs, it is important to understand the requirements: being a legal entity with objectives of general interest, operating on a nonprofit basis, and complying with specific European rules. The CNIL is the competent authority for registering DAOs in France.
CNIL Celebrates Progress in Workplace Equality
On International Women’s Rights Day, the CNIL gave a voice to three former employees to reflect on progress in professional equality. Their testimonies highlight the positive changes that have taken place over time, from attitudes to workplace practices.
The initiative underscores the importance of maintaining constant vigilance to ensure equal rights and opportunities. For privacy professionals, it serves as a reminder of the cultural evolution that has also shaped the data protection sector, a field historically dominated by men.
[Source](https://www.cnil.fr/fr/journee-internationale-droits-des-femmes-2026)
EUROPEAN PARLIAMENT
Online sexual abuse of children: privacy exemption extended
On March 11, 2026, the European Parliament approved, with 458 votes in favor, 103 against, and 63 abstentions, the extension of the exemption under the e-Privacy Directive until August 3, 2027, allowing voluntary detection of child sexual abuse material online. The decision comes as negotiations for a permanent legal framework are underway.
For DPOs, this extension introduces significant constraints: detection must be limited to specific, proportionate technologies with safeguards and human oversight, and cannot apply to end-to-end encrypted communications. Rapporteur Birgit Sippel emphasized the importance of balancing the protection of children with fundamental privacy rights.
The temporary measure highlights the complexity of reconciling online safety and data protection, requiring DPOs to exercise particular caution when evaluating voluntary detection technologies.
Digital Networks Act: New Impact Assessment
The European Parliament’s think tank has published an analysis of the proposed Digital Networks Act, highlighting four critical issues in the development of advanced digital networks in the EU. The impact assessment identifies objectives to bridge the high-quality connectivity gap and increase the pan-European operability of networks.
The document notes some methodological shortcomings: the second specific objective does not adequately consider developments in competing regions, while the third objective is defined in overly general terms. The Regulatory Scrutiny Board issued a “positive with reservations” opinion following an initial negative review.
For DPOs, the Regulation could, in the future, have implications for data governance in digital infrastructure and for interoperability between networks. These are ongoing developments that require monitoring, with no specific privacy obligations emerging at this time.
[Source](https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2026)774729)
Copyright and AI: New Rules to Protect the Creative Sector
The European Parliament adopted, with 460 votes in favor, 71 against, and 88 abstentions, stringent recommendations to protect copyright in the era of generative artificial intelligence. The resolution establishes that EU copyright law must apply to all generative AI systems operating in the European market, regardless of where they were trained.
The new rules require full transparency: AI providers must provide detailed lists of protected works used for training and complete records of crawling activities. Rights holders will be able to exclude their works from AI training, with the EUIPO potentially managing a centralized opt-out system. Particular attention is given to the media sector, which must be compensated when AI systems divert traffic. The creative sector generates 6.9% of European GDP.
For DPOs, these measures introduce new documentation and transparency obligations that intersect with data protection, requiring particular attention to the processes of collecting and using digital content.
DIGITAL MARKETS & PLATFORM REGULATION
EU Moves Toward Banning AI Apps for Nudification Following the Grok Case
The European Union is preparing to ban artificial intelligence systems capable of generating sexually explicit deepfakes of real people. The proposal, which could take effect as early as this summer, comes after X’s Grok allowed the generation of non-consensual sexualized images, including those of minors. According to estimates by civil society organizations, in just 11 days, Grok may have generated 3 million non-consensual sexual images and 20,000 images of child sexual abuse.
The ban will be implemented through an amendment to the AI Act. It will make it illegal to market in Europe any AI system that generates non-consensual sexualized audiovisual content of real people. For DPOs, this development represents a significant regulatory evolution that strengthens the protection of biometric personal data and individuals’ images, requiring adjustments to corporate policies on generative AI systems.
App Store Accountability Act: Age Verification and Parental Consent
The U.S. Congress is considering the App Store Accountability Act (H.R. 3149). This legislative proposal would introduce new age-verification and parental-consent requirements for major app stores in the United States with over 5 million users. The bill was introduced on May 1, 2025, and is currently under congressional review.
The law would require stores to verify users’ ages, categorizing them into specific age groups (under 13, 13–15, 16–17, and adults), and to obtain verifiable parental consent before minors can download apps or make purchases. Stores would also be required to share these “age signals” with developers to facilitate compliance.
From a data protection perspective, the proposal presents significant challenges for DPOs operating internationally: the collection and storage of sensitive age information raises privacy and security concerns. At the same time, parental identity verification mechanisms could create new risks. Several U.S. states have already adopted similar regulations, some of which have been challenged in court.
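The age brackets the bill prescribes can be sketched as a simple classification function. This is an illustrative sketch only: the function name, labels, and boundary handling are assumptions for illustration, not text from H.R. 3149.

```python
def age_bracket(age: int) -> str:
    """Map a verified age to the four categories described in the bill:
    under 13, 13-15, 16-17, and adults. Labels are illustrative."""
    if age < 0:
        raise ValueError("age must be non-negative")
    if age < 13:
        return "under_13"
    if age <= 15:
        return "13_15"
    if age <= 17:
        return "16_17"
    return "adult"

print(age_bracket(12))  # under_13
print(age_bracket(16))  # 16_17
```

A real implementation would attach such an "age signal" to the store account and expose it to developers through whatever API the final law mandates, which is not yet defined.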
INTERNATIONAL DEVELOPMENTS
ICO fines Police Scotland for improper sharing of personal data
The UK’s Information Commissioner’s Office (ICO) has imposed a £66,000 fine and issued a formal reprimand to Police Scotland for serious breaches in the handling of a victim’s personal data. The case concerns a 2021 investigation involving two police officers, in which investigators performed a full extraction of the victim’s phone rather than limiting themselves to the messages necessary for the investigation.
The situation worsened when the professional standards department mistakenly shared all the extracted data—including special categories of personal information regarding health, sexual orientation, and private life—with the officer accused of misconduct. The ICO found that Police Scotland failed to comply with the principles of proportionality in data collection, did not implement adequate technical and organizational measures, and did not notify the breach within the 72-hour timeframe prescribed by the Data Protection Act 2018.
For DPOs, this case underscores the critical importance of implementing data minimization principles and strict controls over internal information flows, especially in sensitive contexts such as disciplinary investigations.
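The 72-hour notification window mentioned above runs from the moment the controller becomes aware of the breach, and a compliance workflow often needs to surface that deadline explicitly. A minimal sketch, with an invented timestamp purely for illustration:

```python
from datetime import datetime, timedelta, timezone

def notification_deadline(aware_at: datetime) -> datetime:
    """Deadline for notifying the supervisory authority:
    72 hours from when the controller became aware of the breach."""
    return aware_at + timedelta(hours=72)

# Hypothetical example: breach discovered on 10 March 2026 at 09:30 UTC.
aware = datetime(2026, 3, 10, 9, 30, tzinfo=timezone.utc)
print(notification_deadline(aware).isoformat())
# 2026-03-13T09:30:00+00:00
```

Using timezone-aware timestamps avoids off-by-hours errors when incidents span daylight-saving changes or multi-jurisdiction teams.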
ARTIFICIAL INTELLIGENCE
Digital Omnibus on AI: EU Council Simplifies Rules on Artificial Intelligence
The EU Council has adopted its position on a proposed regulation to simplify the implementation of harmonized rules on artificial intelligence. The “Digital Omnibus on AI” aims to amend existing regulations to facilitate the application of the AI Act, reducing bureaucratic burdens without compromising the protection of fundamental rights.
For DPOs, this initiative offers an opportunity to streamline compliance processes while maintaining high data protection standards. It will be essential to monitor the progress of negotiations with the European Parliament to understand the final impact on existing governance frameworks.
AI Act Implementing Regulation: Commission Procedural Rules
The European Commission has adopted an implementing regulation establishing detailed procedures for conducting certain AI Act proceedings. This delegated act provides operational clarifications for the practical application of the main Regulation.
The document represents a key piece in the AI regulatory puzzle, offering procedural guidance that DPOs will need to incorporate into their compliance assessments. The establishment of standardized procedures will facilitate a uniform interpretation of the rules across all Member States.
EU AI Act: The Ban on Individual Criminal Risk Assessment
The European AI Act introduces a specific ban on artificial intelligence systems that assess or predict the likelihood that individuals will commit crimes solely through profiling or personality assessments. This restriction in Article 5(1)(d) does not completely ban criminal prediction technologies, but rather focuses on profiling-based individual assessments.
DPOs must understand that the ban is based on the established definition of “profiling” in the GDPR and that systems not covered by this prohibition will still be classified as high-risk, requiring specific safeguards and human intervention. The Commission highlights the risks of perpetuating biases and eroding public trust in law enforcement.
Study Reveals Violent Responses from AI Chatbots
Research by the Center for Countering Digital Hate tested 10 AI chatbots, revealing that most assisted users planning violent attacks. Character.AI proved particularly problematic, explicitly suggesting the use of weapons and physical violence.
Other chatbots, including ChatGPT, Copilot, and Gemini, provided “practical assistance” despite the safeguards in place.
For DPOs, this study underscores the critical importance of algorithmic controls and content governance in AI systems. It is essential to implement robust risk assessment mechanisms and continuous monitoring to prevent harmful outputs that could expose organizations to significant legal liability.
Malware Extensions Disguised as AI Assistants Affect 900,000 Users
Microsoft has identified malicious browser extensions distributed through the Chrome Web Store that masqueraded as legitimate AI tools. The extensions collected conversations with chatbots such as ChatGPT and DeepSeek, as well as the browsing history of approximately 900,000 users, with activity detected in over 20,000 corporate organizations.
The extensions replicated the appearance of popular tools and used deceptive consent mechanisms: even when data collection was disabled, it would automatically reactivate after updates. Telemetry was periodically transmitted to external servers controlled by the attackers, exposing proprietary code, internal workflows, and strategic discussions. For DPOs, this case highlights the importance of strict policies on browser extension installation and the need for training on the risks associated with the corporate use of AI chatbots involving sensitive data.
AI Agents Hack McKinsey Chatbot in Two Hours
Researchers at CodeWall demonstrated how an autonomous AI agent compromised Lilli, McKinsey’s internal AI platform, gaining access to 46.5 million chat messages, 728,000 confidential files, and 57,000 user accounts via an SQL injection. The attack highlights the growing effectiveness of AI agents in cyberattacks.
McKinsey resolved all vulnerabilities within hours of the report and confirmed that there is no evidence of unauthorized third-party access. For DPOs, this case underscores the importance of implementing security by design in corporate AI systems, with targeted penetration testing that includes scenarios involving prompt injection, data access abuse, and privilege escalation, as AI agents are transforming the cyber threat landscape.
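The standard defense against the SQL injection class of flaw exploited here is parameterized queries, which keep user input out of the SQL text so injected fragments are treated as data rather than code. A minimal sketch using SQLite; the schema, values, and payload are illustrative and not from the reported incident:

```python
import sqlite3

# Toy database standing in for a chat-message store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (user_id TEXT, body TEXT)")
conn.execute("INSERT INTO messages VALUES ('alice', 'hello')")

malicious = "alice' OR '1'='1"  # classic injection payload

# A vulnerable pattern (string concatenation into the SQL text) would
# return every row. The parameterized form below matches nothing,
# because the payload is compared literally against user_id.
rows = conn.execute(
    "SELECT body FROM messages WHERE user_id = ?", (malicious,)
).fetchall()
print(rows)  # []

# A legitimate lookup still works as expected.
rows_ok = conn.execute(
    "SELECT body FROM messages WHERE user_id = ?", ("alice",)
).fetchall()
print(rows_ok)  # [('hello',)]
```

The same placeholder pattern exists in every mainstream database driver; the penetration-testing scenarios the article recommends should verify that no query path concatenates untrusted input.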
AI scams drive UK fraud cases to a record 444,000 in 2025
According to Cifas, the UK anti-fraud organization, artificial intelligence has driven a 6% increase in fraud reports in 2025, bringing the total to a record 444,000 cases. Criminals are increasingly exploiting AI to take over mobile, banking, and e-commerce accounts, operating on an “industrialized” scale with convincing synthetic identities.
AI enables the creation of long-term fake profiles that blur the line between real users and artificially generated impostors. There has also been an increase in SIM-swap scams and “fraud-as-a-service.” For DPOs, this scenario requires reviewing authentication and identity verification measures and increasing vigilance regarding customer onboarding processes and AI-based fraud detection.
CYBERSECURITY
Operation Synergia III: Major International Operation Against Cybercrime
Interpol coordinated one of the largest anti-cybercrime operations in history, involving 72 countries between July 2025 and January 2026. The operation led to the seizure of 45,000 malicious IP addresses and 212 electronic devices, with 94 arrests and another 110 suspects under investigation.
Developments in Bangladesh (40 arrests for financial fraud) and Togo (10 suspects linked to a network specializing in social engineering and sextortion) were particularly significant. In Macau, over 33,000 phishing sites impersonating casinos, banks, and government services were identified.
For DPOs, this operation underscores the escalation of transnational threats and the need to strengthen controls over authentication systems and identity verification procedures, as many fraudsters exploit stolen credentials to gain unauthorized access.
[Source](https://thehackernews.com/2026/03/interpol-dismantles-45000-malicious-ips.html)
SocksEscort Dismantled: 369,000 Compromised IPs in 163 Countries
International authorities have dismantled SocksEscort, a criminal proxy service that exploited a botnet of home and business routers infected with AVrecon malware. The service, active since 2020, sold access to approximately 369,000 IP addresses in 163 countries. Operation “Lightning” involved eight countries, leading to the seizure of 34 domains, 23 servers, and the freezing of $3.5 million in cryptocurrency. The service facilitated ransomware, DDoS attacks, and the distribution of child pornography.
DPOs must recognize that routers and IoT devices are critical attack vectors that are often overlooked in risk assessments, requiring specific policies to secure perimeter network devices.
FBI Issues Alert on AVrecon Malware
The FBI has issued a detailed technical alert on the AVrecon malware, which powers the SocksEscort proxy service by exploiting vulnerabilities in approximately 1,200 device models from brands such as Cisco, D-Link, Hikvision, MikroTik, Netgear, TP-Link, and Zyxel. The malware primarily targets SOHO routers by exploiting critical Remote Code Execution and command injection vulnerabilities. The alert includes indicators of compromise (IoC) and specific recommendations for securing network devices.
For DPOs, this development underscores the importance of maintaining up-to-date inventories of all network devices and implementing patch management programs for enterprise IoT.
Slopoly: First AI-Generated Malware in Ransomware Attacks
IBM X-Force identified Slopoly, a PowerShell backdoor likely created with generative AI tools, which the Hive0163 group used in an Interlock ransomware attack. The malware exhibits characteristics typical of code generated by LLMs: extensive comments, structured logging, error handling, and clearly named variables. The attack began with ClickFix social engineering techniques.
For DPOs, this case represents a turning point: AI is democratizing malware development, allowing even less sophisticated actors to create custom payloads. Detection strategies must be updated, considering that AI-generated code can more easily evade traditional security systems.
CISA Adds New Vulnerabilities to the KEV Catalog
The Cybersecurity and Infrastructure Security Agency has added three critical vulnerabilities to its catalog of active exploits. Among these, CVE-2025-26399 in SolarWinds Web Help Desk (CVSS 9.8) stands out, exploited by the Warlock ransomware group to gain initial access to systems. The other two include an authentication bypass in Ivanti Endpoint Manager (CVE-2026-1603) and an SSRF vulnerability in Omnissa Workspace One UEM.
For DPOs, this update underscores the importance of maintaining up-to-date inventories of exposed systems and implementing accelerated patching procedures. Federal agencies have been given tight deadlines for applying patches, indicating the severity of the situation.
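Operationally, tracking the KEV catalog means filtering it for the vendors in your inventory and ordering by remediation due date. A hedged sketch of that triage step; the entries below are invented placeholders, not real catalog records (CISA publishes the actual catalog as a JSON feed with fields like `cveID`, `vendorProject`, and `dueDate`):

```python
from datetime import date

# Placeholder entries mimicking the KEV catalog's field names.
kev_entries = [
    {"cveID": "CVE-0000-0001", "vendorProject": "ExampleCorp",
     "dueDate": "2026-04-01"},
    {"cveID": "CVE-0000-0002", "vendorProject": "OtherVendor",
     "dueDate": "2026-03-20"},
    {"cveID": "CVE-0000-0003", "vendorProject": "ExampleCorp",
     "dueDate": "2026-03-25"},
]

def due_soon(entries, vendor):
    """Return a vendor's KEV entries ordered by remediation due date."""
    picked = [e for e in entries if e["vendorProject"] == vendor]
    return sorted(picked, key=lambda e: date.fromisoformat(e["dueDate"]))

for e in due_soon(kev_entries, "ExampleCorp"):
    print(e["cveID"], e["dueDate"])
```

In practice the entry list would be fetched from CISA's published feed and joined against the organization's asset inventory before prioritizing patches.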
Active Exploit for Ivanti EPM Vulnerability
CISA has confirmed the active exploitation of CVE-2026-1603, an authentication bypass vulnerability in Ivanti Endpoint Manager that allows credential theft via XSS attacks without user interaction. Although Ivanti has not yet documented any exploitation cases, the agency has ordered federal organizations to apply patches within 3 weeks. Shadowserver monitors over 700 publicly exposed EPM instances, primarily in North America.
The situation highlights a concerning communication gap between vendors and security authorities. For DPOs, the history of Ivanti EPM shows repeated instances of exploitation: it is essential to implement independent threat monitoring and maintain proactive patching processes, especially for endpoint management systems that have privileged access to corporate infrastructure.
Attack campaign targeting FortiGate devices
SentinelOne has detected a sophisticated campaign exploiting FortiGate devices to penetrate corporate networks. Attackers exploit known vulnerabilities or weak credentials to extract configuration files containing service account credentials and network topology information. A documented case shows the creation of a “support” administrator account and four new firewall policies to ensure persistent access.
The attacks primarily target healthcare, government, and managed service providers. The technique is particularly insidious because FortiGate firewalls often have privileged access to Active Directory and LDAP to implement role-based policies. For DPOs, this scenario requires reviewing perimeter security configurations and implementing additional controls on network appliance privileges.
Microsoft Patch Tuesday: 83 Vulnerabilities Fixed
Microsoft has released patches for 83 vulnerabilities, including a critical one in the Devices Pricing Program (CVE-2026-21536, CVSS 9.8), which has already been proactively mitigated. Noteworthy is CVE-2026-26118, which allows privilege escalation in Azure MCP Server Tools via malicious URLs that can capture managed identity tokens. Two vulnerabilities were already publicly known, but none are currently being exploited.
For DPOs, this release highlights the importance of maintaining accurate inventories of cloud assets and deployed Azure tools. The growing complexity of cloud environments requires more sophisticated vulnerability management processes and enhanced coordination between security teams and system administrators.
TECH & INNOVATION
Starbucks: Data Breach Affects 900 Employees
Starbucks suffered a data breach that compromised the personal data of nearly 900 employees. The attack, detected on February 6, involved unauthorized access to the “Partner Central” portal via a phishing campaign that used fake websites mimicking the company’s platform. The exposed data includes names, Social Security numbers, dates of birth, and employees’ banking information. The unauthorized access occurred between January 19 and February 11, suggesting the attack persisted for a significant period.
For DPOs, the case highlights the importance of anti-phishing training and robust access controls. Starbucks’ response, which includes notifying authorities and providing free identity protection services for 2 years, serves as a model for compliant post-breach management.
Loblaw: Canadian Giant Under Attack
Canadian retail giant Loblaw, with 2,500 stores and 220,000 employees, suffered a breach that exposed basic customer information, including names, phone numbers, and email addresses. The company detected suspicious activity on a “non-critical” portion of its IT network. Despite the limited impact, Loblaw took precautionary measures by automatically logging all users out of their accounts.
For DPOs, the case demonstrates the effectiveness of network segmentation in limiting the impact of breaches. The proactive response involving the forced disconnection of users represents a best practice in incident management, even when exposure appears limited.
England Hockey Targeted by AiLock Ransomware
The governing body of field hockey in England is under investigation for a potential data breach orchestrated by the AiLock ransomware group. The cybercriminals claim to have stolen 129GB of data and are threatening to release it if they do not receive the demanded ransom. AiLock, active since 2025, uses sophisticated double-extortion tactics and exploits privacy regulation violations as a bargaining chip.
With over 150,000 registered members affected, the case underscores the importance of robust business continuity plans for sports and nonprofit organizations. The organization is collaborating with external specialists and law enforcement, following best practices for incident response.
[Source](https://www.bleepingcomputer.com/news/security/england-hockey-investigating-ransomware-data-breach)
Ericsson Hit by Data Breach via Third-Party Vendor
Ericsson’s U.S. subsidiary has reported a data breach affecting approximately 15,000 people. The incident occurred at an unidentified service provider, where attackers gained unauthorized access to systems between April 17 and 22, 2025. The compromised data includes sensitive information such as names, addresses, Social Security numbers, and driver’s licenses. The most concerning aspect is the long delay in detection and disclosure: the investigation was not completed until February 2026, nearly a year after the incident.
For DPOs, this case highlights the importance of implementing rigorous controls over third-party vendors, continuous monitoring, and contracts that require timely incident notification. Managing the digital supply chain poses a growing challenge for privacy compliance.
[Source](https://www.bleepingcomputer.com/news/security/ericsson-us-discloses-data-breach-after-service-provider-hack)
“LeakyLooker” Vulnerabilities in Google Looker Studio
Tenable discovered nine critical vulnerabilities in Google Looker Studio, collectively named “LeakyLooker.” These flaws allowed for the execution of cross-tenant SQL queries and unauthorized access to victims’ databases in Google Cloud environments. The vulnerabilities affected services such as BigQuery, Spanner, PostgreSQL, and MySQL.
Attackers could have cloned reports while retaining the original owner’s credentials, executing arbitrary queries across entire GCP projects. Google resolved the vulnerabilities following a responsible disclosure in June 2025. For DPOs, this incident underscores the importance of carefully evaluating multi-tenant cloud services and implementing granular access controls to protect critical business data.
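By way of illustration, one form such a granular, tenant-scoped control can take is a guard that rejects any query referencing a table outside the caller’s own project. This is a minimal sketch with hypothetical names and a deliberately simplified check, not Google’s actual fix:

```python
import re

# Match fully qualified table references of the form `project.dataset.table`
# (the backtick-quoted style used in BigQuery SQL).
TABLE_REF = re.compile(r"`([\w-]+)\.([\w-]+)\.([\w-]+)`")

def query_allowed(sql: str, tenant_project: str) -> bool:
    """Allow a query only if every table it references belongs to the tenant.

    Queries with no fully qualified references are refused rather than
    guessed at: fail closed, not open.
    """
    refs = TABLE_REF.findall(sql)
    if not refs:
        return False
    return all(project == tenant_project for project, _, _ in refs)

# Usage: same-tenant query passes, cross-tenant query is blocked.
ok = query_allowed("SELECT * FROM `acme-prod.sales.orders`", "acme-prod")
bad = query_allowed("SELECT * FROM `victim-proj.hr.salaries`", "acme-prod")
```

A real deployment would enforce this at the identity layer rather than by parsing SQL, but the fail-closed principle is the same.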
DOGE Employee Accused of Stealing Social Security Data
A former employee of Musk’s Department of Government Efficiency is accused of stealing confidential Social Security Administration databases containing information on over 500 million Americans. According to a whistleblower, the individual transferred the “Numident” and “Master Death File” databases—including Social Security numbers, dates of birth, and parental information—to a USB drive.
The case highlights the risks of uncontrolled privileged access and the need for continuous monitoring of users with elevated privileges. For DPOs, this incident underscores the importance of implementing zero-trust access principles and effective Data Loss Prevention (DLP) controls, especially for sensitive government data.
ELECQ Hit by Ransomware, Customer Data Compromised
ELECQ, a Chinese manufacturer of smart chargers for electric vehicles, suffered a ransomware attack on its AWS infrastructure. Cybercriminals encrypted and copied databases containing customers’ names, email addresses, phone numbers, and home addresses. The company confirmed that the charging devices were not compromised and remain operational.
The incident affected customers in European markets, with notifications sent to authorities in the UK and Germany. ELECQ has implemented corrective measures, disabling remote access services and strengthening network encryption. For DPOs, this case demonstrates the importance of segmenting cloud infrastructure and maintaining secure backups to ensure business continuity during security incidents.
SCIENTIFIC RESEARCH
A selection of the week’s most relevant papers from arXiv on AI, Machine Learning, and Privacy
Privacy and Machine Learning
Quantifying Memorization and Privacy Risks in Genomic Language Models - Genomic language models risk memorizing specific sequences from training data, raising serious privacy and regulatory compliance concerns. The paper proposes methodologies to quantify these data leakage risks in the healthcare sector, which are essential for DPOs working in genomics and biotechnology. arXiv
Quantifying Membership Disclosure Risk for Tabular Synthetic Data Using Kernel Density Estimators - The study analyzes the risks of membership inference attacks on tabular synthetic data in sensitive sectors such as healthcare and finance. It proposes metrics based on kernel density estimators to evaluate the effectiveness of privacy guarantees for synthetic data, an essential tool for compliance officers in impact assessments. arXiv
Differential Privacy and Federated Learning
Resource-Adaptive Federated Text Generation with Differential Privacy - Presents an approach to generating synthetic datasets with differential privacy in cross-silo federated learning, reducing communication costs and privacy risks. Particularly relevant for organizations that must balance data utility and GDPR compliance in multi-party scenarios. arXiv
Nonparametric Variational Differential Privacy via Embedding Parameter Clipping - Introduces a framework for building privacy-preserving language models using nonparametric differential privacy techniques, addressing numerical stability and privacy guarantees, which are crucial for compliant implementations in enterprise contexts. arXiv
FedMomentum: Preserving LoRA Training Momentum in Federated Fine-Tuning - Optimizes privacy-preserving federated fine-tuning of LLMs through efficient LoRA techniques. It resolves noisy aggregation issues while maintaining structural expressiveness, which is relevant for organizations collaborating on AI models while adhering to privacy constraints. arXiv
SNPgen: Phenotype-Supervised Genotype Representation and Synthetic Data Generation via Latent Diffusion - Develops a method to generate synthetic genomic data while preserving utility for downstream analysis, addressing data access restrictions. Offers a privacy-preserving alternative for sharing genomic datasets, crucial for compliance in biomedical research. arXiv
AI Security and Robustness
TOSSS: a CVE-based Software Security Benchmark for Large Language Models - Introduces a CVE-based benchmark to evaluate the software security capabilities of LLMs used in development workflows. Critical for assessing security risks in the adoption of AI tools in enterprise environments and for informing technology governance policies. arXiv
Explainable LLM Unlearning Through Reasoning - Proposes an explainable approach to unlearning in LLMs, which is essential for exercising GDPR rights such as erasure and rectification. The method improves the transparency of the information removal process, a key requirement for demonstrating compliance with supervisory authorities. arXiv
Explainability and Control
Spatio-Temporal Attention Graph Neural Network: Explaining Causalities With Attention - Develops graph neural networks with attention mechanisms to improve explainability in anomaly detection for industrial control systems. The approach addresses deployability limitations due to poor interpretability, a fundamental requirement for audits and regulatory compliance. arXiv
Contract And Conquer: How to Provably Compute Adversarial Examples for a Black-Box Model? - Presents a method for provably computing adversarial examples for black-box models, improving robustness testing. Essential for risk assessment and validation of critical AI systems, supporting compliance officers in evaluating the resilience of deployed models. arXiv
Legal Compliance
Deterministic Fuzzy Triage for Legal Compliance Classification and Evidence Retrieval - Proposes a deterministic system for legal compliance classification and evidence retrieval, aligned with frameworks such as HIPAA. It uses transparent dual encoders for document triage, offering the reproducibility and traceability required in legal contexts. arXiv
Column – AI Act in a Nutshell | Part 12
Article 16 - Obligations of Providers of High-Risk AI Systems
After examining Article 15 and the technical requirements for accuracy, robustness, and cybersecurity in the last installment, we continue our journey through the AI Act by analyzing Article 16, which represents the organizational core of compliance for providers of high-risk AI systems. This is one of the most operational provisions of the Regulation: it does not establish technical requirements but precisely defines what providers must actually do to demonstrate the compliance of their systems.
A List of Obligations, Not a General Principle
Article 16 has a deliberately enumerative structure: it identifies a series of distinct obligations that providers of high-risk AI systems must comply with before placing the system on the market and throughout its lifecycle. This approach reflects the European legislator’s decision to make compliance verifiable and measurable, moving beyond general principles in favor of concrete, documentable requirements.
The first fundamental obligation concerns the risk management system: providers must establish, implement, document, and maintain a structured system to identify, assess, and manage the risks associated with the AI system throughout its entire lifecycle. This system is not a static document, but an iterative process that must be updated based on evidence gathered during post-market monitoring.
Technical Documentation and Automatic Event Logging
Providers are required to prepare and keep up to date the technical documentation specified in Annex IV of the Regulation. This documentation must enable competent authorities to assess the system’s compliance with the AI Act and must include information on design, development, expected performance, training data, and implemented security measures.
Closely related is the obligation to establish systems for the automatic logging of events during the operation of the high-risk AI system. Logs must be retained for an appropriate period (at least six months, as required by the applicable provisions) and must enable the traceability of decisions made by the system. For DPOs, this requirement directly intersects with GDPR obligations regarding data retention and the documentation of automated decisions.
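As a hedged sketch of what such logging might look like in practice (the file location, field names, and JSON-lines format are illustrative assumptions, not prescribed by the Regulation), each automated decision could be appended to a structured record with a timestamp and an input hash, and retention checked against the six-month minimum:

```python
import datetime
import hashlib
import json
import pathlib

LOG_FILE = pathlib.Path("ai_events.jsonl")   # hypothetical log location
RETENTION = datetime.timedelta(days=183)     # at least six months

def log_event(system_id: str, input_data: str, output: str) -> None:
    """Append one traceability record per automated decision."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system_id,
        # Hash the input so the log supports traceability without
        # duplicating personal data inside the log itself.
        "input_sha256": hashlib.sha256(input_data.encode()).hexdigest(),
        "output": output,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def past_minimum_retention(record: dict, now: datetime.datetime) -> bool:
    """True once a record has passed the minimum retention window."""
    ts = datetime.datetime.fromisoformat(record["ts"])
    return now - ts > RETENTION

log_event("credit-scoring-v1", "applicant-42", "approved")
```

Hashing inputs rather than storing them is one way to reconcile the AI Act’s traceability duty with the GDPR’s data minimisation principle, the very intersection noted above.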
Transparency Toward Users and Human Oversight
Article 16 requires that high-risk AI systems be designed and developed so that users can correctly interpret their outputs and exercise effective human oversight.
Providers must therefore provide clear and accessible instructions for use that explain the system’s features, capabilities, and limitations, including the conditions under which it may not function correctly.
This transparency obligation translates into a design requirement: the system must be designed from the outset to enable human oversight, not as an add-on feature but as a structural characteristic.
Conformity Assessment, Registration, and CE Marking
Before placing a high-risk AI system on the market, providers must subject the system to a conformity assessment in accordance with the procedures set out in Article 43. Depending on the sector of application, this assessment may be carried out internally by the provider (self-assessment) or must be entrusted to a third-party notified body.
Once the assessment is passed, the provider must register the system in the public European database of high-risk AI systems established under Article 71 and affix the CE marking to the system, certifying compliance with the requirements of the Regulation. For organizations accustomed to the framework of European harmonization legislation on products and devices, these requirements follow familiar logic; for pure software providers, however, they represent a significant new development.
Corrective Measures and Cooperation with Authorities
Providers must immediately take the necessary corrective measures if a high-risk AI system does not comply with the Regulation’s requirements or presents unforeseen risks, including withdrawal from the market where necessary. They are also required to cooperate with the competent authorities of the Member States and, where applicable, with the European Commission and the AI Office, by providing all requested information and documentation.
This obligation to cooperate has significant practical implications: providers must establish internal procedures to respond promptly to requests from authorities and, where required, designate an authorized representative in the European Union if the provider is established in a third country.
Practical Implications for DPOs
For Data Protection Officers, Article 16 introduces requirements that overlap with and complement the GDPR framework. Technical documentation, logging systems, compliance assessments, and obligations to cooperate with authorities require a coordinated approach among the legal team, the privacy compliance team, and the technical managers responsible for the development and management of AI systems.
In particular, the management of logs automatically generated by high-risk AI systems raises data governance issues that DPOs must address jointly with IT managers: who has access to the logs, how long they are retained, how they are protected, and under what circumstances they may be shared with competent authorities.
Article 16 thus outlines a model of integrated compliance, in which data protection and compliance with the AI Act are not two parallel tracks but paths destined to converge within organizations’ operational procedures.
In the next installment, we will analyze Article 17, which governs the quality management system that providers of high-risk AI systems must adopt.
FROM THE NICFAB BLOG
ePrivacy 2022–2026: From the Withdrawal of the Proposal to the Extension of the CSAM Exemption and the CSAR Trilogues
March 13, 2026
Systematic update on the ePrivacy regulatory framework from 2022 to 2026: withdrawal of the proposed Regulation, second extension of the CSAM derogation approved by the EP on March 11, 2026 (process ongoing), and CSAR trilogues still underway.
Featured events and meetings
Events and meetings
Apply AI sectoral deep dive webinars - Agrifood, climate & environment (event on March 19, 2026)
European Commission | Info
AI for 3D Digital Twins in Cultural Heritage: Stakeholder Forum (event on March 23, 2026)
European Commission | Info
Publications and Institutional Initiatives
EDPB-EDPS Joint Opinion on the European Biotech Act Proposal (published on March 10, 2026)
EDPS | Info
EDPB and EDPS support harmonization of clinical trials under the European Biotech Act, but call for specific safeguards for sensitive health data (published on March 12, 2026)
EDPB | [Info](https://www.edpb.europa.eu/news/news/2026/edpb-and-edps-support-harmonisation-clinical-trials-under-european-biotech-act-call_en)
Blog post: Advancing into Practice: Third Meeting of the AI Act Correspondents Network
EDPS | Info
Happy Data Protection Day 2026!
EDPS | Info
Commission holds first meeting of Special Panel on child safety online
European Commission | Info
European Union endorses Leaders’ Declaration at AI Summit in India
European Commission | Info
Executive Vice-President Virkkunen in India for summit on artificial intelligence
European Commission | Info
Conclusion
Nicola Fabiano
Two record fines from the Italian Data Protection Authority total nearly 20 million euros in a single week, critical vulnerabilities continue to be actively exploited, and the European Union is accelerating efforts to regulate artificial intelligence and protect creative content. The picture makes increasingly clear where regulatory priorities will be concentrated in the coming months.
The 17.6 million euro fine imposed on Intesa Sanpaolo for the unlawful profiling of 2.4 million customers in the Isybank transaction represents much more than a monetary penalty. It marks an evolution in the Data Protection Authority’s approach to complex corporate transactions, where “business necessity” can no longer justify the massive processing of personal data without adequate safeguards. At the same time, the 2-million-euro fine imposed on Acea Energia for contracts entered into by door-to-door agents without customers’ knowledge confirms a hard line against aggressive commercial practices that violate the fundamental principle of informed consent. These cases reveal a troubling pattern: organizations treating personal data as a competitive lever without adequately assessing the impact on data subjects’ rights. The magnitude of the fines suggests that the Italian Data Protection Authority is aligning itself with stricter European standards, where the deterrent effect takes precedence over the mere formal correction of violations.
At the European level, the joint EDPB-EDPS opinion on the European Biotech Act highlights the growing complexity of health data regulation in the era of biotechnological innovation. While authorities support the harmonization of clinical trials, they require specific safeguards for health data that go beyond the standard protections of the GDPR. This approach foreshadows a future in which highly sensitive sectors will require dedicated regulatory frameworks, further complicating compliance for multinational organizations. The EDPB’s letter to the Commission on transfers to the U.S. adds further uncertainty for organizations operating on a transatlantic scale.
The European Parliament’s decision to extend the voluntary rules against online sexual abuse until August 2027 demonstrates a pragmatic yet problematic approach to regulation. The extension allows platforms to continue automatically detecting illegal content, but leaves open fundamental questions about proportionality and the risks of false positives.
Significantly, the extension maintains the exclusion of end-to-end encrypted communications: a red line that the European legislator has no intention of crossing, at least for now.
On the artificial intelligence front, the Grok case and the potential European ban on nudification apps signal an acceleration in regulatory intervention. The progress of the Digital Omnibus on AI and the implementing Regulation of the AI Act confirm that concrete enforcement is now imminent. The ban on individual risk profiling for crime prediction, analyzed by the Future of Privacy Forum, illustrates how delicate the balance is between public safety and the protection of fundamental rights. Malicious browser extensions disguised as AI tools, which have reached 900,000 installations, and the hack of the McKinsey platform by an autonomous AI agent demonstrate how threats are evolving rapidly.
On the cybersecurity front, Operation Synergia III, involving 72 countries, the dismantling of SocksEscort, and the actively exploited vulnerabilities in SolarWinds, Ivanti, and FortiGate confirm that the security of technology supply chains remains the Achilles’ heel of the digital ecosystem. The Ericsson data breach—with nearly a year passing between the incident and full disclosure—and the cases involving Starbucks and Loblaw demonstrate that even large organizations are not immune to sophisticated attacks and gaps in third-party vendor controls.
Taken together, these developments paint a picture in which compliance can no longer be limited to formal rule verification but must evolve toward dynamic governance systems capable of rapidly adapting to emerging threats and constantly evolving regulatory contexts. For DPOs and compliance officers, data protection impact assessments can no longer be mere formal exercises: they must substantively address risks to data subjects’ rights, especially in complex business contexts such as mergers, acquisitions, or corporate restructurings. The question that remains is whether organizations have truly understood that digital transformation requires an equally radical transformation of risk management and compliance models.
📧 Edited by Nicola Fabiano
Lawyer - Fabiano Law Firm
🌐 Studio Legale Fabiano: https://www.fabiano.law
🌐 Blog: https://www.nicfab.eu
🌐 DAPPREMO: www.dappremo.eu
