NicFab Newsletter
Issue 9 | February 24, 2026
Privacy, Data Protection, AI, and Cybersecurity
Welcome to issue 9 of the weekly newsletter dedicated to privacy, data protection, artificial intelligence, cybersecurity, and ethics. Every Tuesday, you will find a carefully selected list of the most relevant news from the previous week, with a focus on European regulatory developments, case law, enforcement, and technological innovation.
In this issue
- ITALIAN DATA PROTECTION AUTHORITY
- EDPB - EUROPEAN DATA PROTECTION BOARD
- EDPS - EUROPEAN DATA PROTECTION SUPERVISOR
- EUROPEAN COMMISSION
- CNIL - FRENCH AUTHORITY
- EUROPEAN PARLIAMENT
- COUNCIL OF THE EUROPEAN UNION
- DIGITAL MARKETS & PLATFORM REGULATION
- INTERNATIONAL DEVELOPMENTS
- ARTIFICIAL INTELLIGENCE
- CYBERSECURITY
- TECH & INNOVATION
- SCIENTIFIC RESEARCH
- AI Act in a Nutshell - Part 9
- Events and meetings reported
- Conclusion
ITALIAN DATA PROTECTION AUTHORITY
Sanction against eCampus for facial recognition: a warning for the education sector
The Data Protection Authority imposed a €50,000 fine on eCampus University for the unlawful use of facial recognition systems during online teaching qualification courses. The violation affected more than 450 students per lesson, highlighting the extent of the impact on the rights of the data subjects concerned.
The Authority highlighted two fundamental issues: the lack of an appropriate legal basis for processing biometric data and the failure to conduct the mandatory DPIA. Of particular significance is the consideration of the availability of less invasive alternative tools for verifying student attendance.
For DPOs in the education sector, this case serves as a clear warning: biometric data requires enhanced safeguards and a rigorous assessment of proportionality, always favoring less intrusive solutions when available.
Health screening: new guidelines for the use of telephone numbers
The Data Protection Authority has adopted specific guidelines that allow healthcare companies to use patients’ telephone numbers, collected for previous services, to promote public screening campaigns. The decision is based on the principle of compatibility between the original purposes of processing and public health prevention.
The safeguards imposed are stringent: updating the privacy policy, limiting campaigns to those required by law, excluding data collected in sensitive contexts (abortion, anonymous births, services for HIV-positive individuals), clear identification of the sender, and immediate opt-out procedures.
For healthcare DPOs, these guidelines provide a solid basis for effectively balancing the public interest in prevention with the protection of data subjects’ rights, providing a clear operational framework for practical implementation.
Body cameras for the Pescara local police: negative opinion from the Data Protection Authority
The Data Protection Authority has issued a negative opinion on the data protection impact assessment (DPIA) submitted by the Municipality of Pescara for the use of body cameras by local police officers in the context of auxiliary public security and judicial police activities. Numerous critical issues were found in the system despite the Authority’s guidance during several discussions.
The main issues concern the security of the technological solutions: the data management IT system is provided by a US company, and the Municipality has not clarified whether alternatives available on the market have been evaluated. The Data Protection Authority also found that there were no security measures in place to prevent the supplier from accessing the processed data in clear text, thereby posing a risk of transfer to third countries in violation of Directive (EU) 2016/680. Another critical issue is the presence of a SIM card inside the body camera, about which no clarification has been provided.
For local authority DPOs, this case highlights the crucial importance of conducting thorough assessments of technology suppliers, especially when high-risk processing involves law enforcement data and potential transfers outside the EU.
AGID guidelines on service accessibility: favorable opinion from the Data Protection Authority
The Data Protection Authority has expressed a favorable opinion on the AGID draft decree defining the guidelines on the accessibility of public and private services, such as passenger transport, the internet, and payment systems. The aim is to offer services and information that are also accessible to people with disabilities, without discrimination.
The draft incorporates the Data Protection Authority’s recommendations to ensure compliance with privacy regulations. Based on the principles of privacy by design and privacy by default of the GDPR, service providers dedicated to people with disabilities will be required to take appropriate measures to prevent the tracking of the tools, solutions, and settings that help them access digital services, and must expressly declare that they do not use web tracking techniques from which it is possible to infer any disability conditions of the user.
For DPOs, these guidelines represent a virtuous model of integration between accessibility and data protection, demonstrating how the principles of the GDPR can protect particularly vulnerable categories without hindering the provision of essential services.
EDPB - EUROPEAN DATA PROTECTION BOARD
EDPB identifies challenges in implementing the right to erasure
The EDPB has published an important report on the results of coordinated action 2025 on the right to be forgotten (Art. 17 GDPR), involving 32 data protection authorities and 764 data controllers across Europe. The initiative revealed seven recurring challenges that hinder the full implementation of this fundamental right.
Among the most significant issues are the lack of adequate internal procedures to handle requests, the use of ineffective anonymization techniques as an alternative to erasure, and difficulties in determining retention periods and managing backups. Particularly critical is the complexity of balancing the right to erasure with other fundamental rights and freedoms.
For DPOs, this report offers a valuable roadmap for improving internal processes and ensuring compliance. The EDPB will develop further guidance at the European level, drawing on best practices already available at the national level.
Full report on coordinated action on the right to erasure
The full document on coordinated action CEF 2025 on the right to erasure is now available, including annexes with national reports from individual data protection authorities. The document provides an in-depth analysis of the implementation practices of the right to be forgotten in Member States.
The report highlights that 9 authorities have launched formal investigations, while 23 have conducted fact-finding activities. The aggregated results show significant disparities in the practical application of Article 17 GDPR, with a particular focus on the difficulties SMEs face in implementing effective procedures.
For privacy professionals, the document is an essential resource for understanding the authorities’ expectations and identifying areas for improvement in their deletion request management processes.
Report on the stakeholder event on anonymization and pseudonymization
The EDPB has published the report of the stakeholder event held on December 12, 2025, dedicated to anonymization and pseudonymization techniques. The event brought together industry experts, representatives of supervisory authorities, and civil society organizations to discuss the technical and legal challenges of these privacy tools.
Discussions focused on the evolution of anonymization technologies amid increasingly sophisticated re-identification tools, raising questions about the robustness of the techniques currently in use. Particular attention was paid to standards for assessing the effectiveness of anonymization measures.
For DPOs, this document provides valuable insights into the EDPB’s future expectations for implementing these techniques, suggesting a review of current organizational practices in light of new technological challenges.
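As a practical complement to the anonymization and pseudonymization debate, here is a minimal sketch of keyed pseudonymization using HMAC-SHA256, one common technique for producing stable pseudonyms that cannot be reversed without the key. The function name and the example key are illustrative only; in practice the secret would live in a key management system, never in source code.

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym via HMAC-SHA256.

    Unlike a plain hash, an attacker without the key cannot rebuild
    the mapping by hashing candidate identifiers (a known weakness
    of unsalted hashing of emails or tax codes)."""
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical key for illustration only.
key = b"demo-key-do-not-use-in-production"
p1 = pseudonymize("mario.rossi@example.com", key)
p2 = pseudonymize("mario.rossi@example.com", key)
assert p1 == p2       # deterministic: supports linkage within a dataset
assert len(p1) == 64  # hex-encoded SHA-256 digest
```

Note that under the GDPR such data remains personal data: whoever holds the key can re-identify, which is precisely the distinction between pseudonymization and anonymization discussed at the event.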
EDPB response on spyware abuse in the EU
The EDPB has responded to the civil society open letter on recent cases of spyware abuse in the European Union. The response addresses concerns raised about the misuse of surveillance software by state authorities and the implications for fundamental rights to privacy and data protection.
The document clarifies the role of data protection authorities in monitoring and enforcement when the processing of personal data by spyware does not fall within the national security exemptions. The EDPB reiterates the importance of robust procedural safeguards and the necessary proportionality in the use of such tools.
For professionals in the sector, this response highlights the evolution of the scope of application of the GDPR in security contexts and the importance of carefully assessing the legal bases for sensitive processing.
[Source](https://www.edpb.europa.eu/our-work-tools/our-documents/letters/reply-civil-society-open-letter-response-recent-spyware-abuse_en)
EDPS - EUROPEAN DATA PROTECTION SUPERVISOR
Data takes flight: Navigating privacy at the airport
On February 12, 2026, the EDPS and EDPB organized a conference dedicated to the protection of personal data in air transport. The event explored the complex dynamics between security and privacy at airports, where passengers are subject to multiple forms of data collection and processing.
For DPOs working in the transport sector or collaborating with airlines, this initiative underscores the need to balance national security needs with fundamental rights. The event provides practical materials to better understand data flows at airports and to develop more effective compliance strategies.
Blog post: Advancing into Practice: Third Meeting of the AI Act Correspondents Network
The EDPS has published a blog post on the third meeting of the AI Act Correspondents Network, signaling the transition from the theoretical to the practical phase of implementation. These meetings represent a crucial moment in defining the operational modalities for supervising artificial intelligence systems.
For DPOs, this evolution means they need to prepare concretely for the application of the AI Act. The correspondents’ network is developing practical guidelines that will be essential for corporate compliance, especially for organizations that use or develop high-risk AI systems.
Extension of interim rules to combat child sexual abuse online must address shortcomings and prevent indiscriminate scanning
The EDPS has expressed concerns about the extension of interim rules to combat child sexual abuse online, emphasizing the need to avoid indiscriminate scanning. The position highlights the delicate balance between child protection and fundamental digital rights.
This position is particularly relevant for DPOs of digital platforms and electronic communications service providers. Future rules will need to incorporate stronger safeguards to prevent blanket monitoring, requiring more targeted and privacy-friendly approaches to detecting illegal content.
EUROPEAN COMMISSION
Investigation into Shein under the Digital Services Act
The European Commission has launched formal proceedings against Shein, the Chinese e-commerce platform, under the Digital Services Act. This action represents a significant escalation in the enforcement of the new European digital services legislation, which aims to ensure a safer, more transparent online environment.
The investigation is reportedly focused on the platform’s content moderation practices and recommendation systems. For DPOs, this case highlights the importance of aligning compliance strategies not only with the GDPR but also with the growing responsibilities arising from the DSA, especially for platforms operating in the European market.
This precedent highlights how the EU is stepping up enforcement of its digital regulations, requiring greater attention to data governance and operational transparency.
Towards European Open Digital Ecosystems
The Commission has published a strategic communication entitled “Towards European Open Digital Ecosystems,” outlining the EU’s vision for technological sovereignty and digital interoperability. This initiative is part of President von der Leyen’s policy guidelines and aims to reduce European technological dependence.
The document is a call for evidence, inviting stakeholders and citizens to contribute to defining solutions for more open and interoperable digital ecosystems. For DPOs, this initiative could translate into new data portability standards and interoperability requirements that will affect data management architectures.
The emphasis on open ecosystems suggests a future shift towards greater transparency and user control over their data, requiring more sophisticated privacy-by-design strategies.
Two years of the Digital Services Act: 50 million decisions overturned
The European Commission celebrated two years of the Digital Services Act with significant data: online platforms overturned nearly 50 million decisions regarding user content or accounts, allowing European citizens to exercise their rights online. Of the 165 million moderation decisions challenged through the platforms’ internal mechanisms, 30% were reviewed in favor of users.
The data also highlights the effectiveness of out-of-court resolution bodies: in the first half of 2025, they examined more than 1,800 disputes relating to content on Facebook, Instagram, and TikTok in the EU, overturning platform decisions in 52% of cases. The DSA also introduced a ban on advertising targeted at minors, traceability obligations for sellers on marketplaces, and unprecedented access for researchers and civil society to moderation practices.
For DPOs, these results confirm that the DSA is an effective operational tool that complements the GDPR in protecting digital rights, requiring an integrated compliance view that considers both regulations.
[Source](https://digital-strategy.ec.europa.eu/en/news/two-years-digital-services-act-allows-50-million-content-moderation-decisions-platforms-be-reversed)
CNIL - FRENCH AUTHORITY
Right to erasure: results of CNIL checks in coordinated European action
The CNIL has published the results of checks conducted in 2025 on the right to erasure as part of the fourth European coordinated action. The French Authority inspected six organizations across different sectors, in part based on complaints received.
The overall assessment is positive: data controllers generally comply with erasure requests. Where a request is refused, the refusal is based on the exceptions provided for in the GDPR, such as legal retention obligations. However, some difficulties remain in the practical implementation of the right.
This coordinated approach at the European level demonstrates the importance of cooperation between supervisory authorities and provides DPOs with a framework for regulatory expectations regarding the right to erasure.
Agenda for the plenary session of February 19, 2026
The CNIL has published the agenda for the plenary session of February 19, which includes topics of particular relevance to privacy professionals. Items on the agenda include the examination of a draft resolution on child protection and the communication of the 2025 supervisory activity report with proposals for the 2026 program.
The second part of the meeting addresses integrity checks for operators assisting elderly and disabled persons, including an examination of draft regulations on the verification of criminal records. These developments are particularly significant for the health and social sectors.
For DPOs, these topics highlight the growing focus on protecting vulnerable individuals and the evolution of the Authority’s control strategies.
Second edition of the CNIL-EHESS Prize
The CNIL and the École des Hautes Études en Sciences Sociales (EHESS) are launching the second edition of their prize dedicated to the humanities and social sciences, as part of Privacy Research Day on June 24, 2026, in Paris. Applications are open until March 22.
The prize aims to promote interdisciplinary research on the protection of privacy and freedoms. Articles of 30,000-60,000 characters in French or English are eligible, presenting original results, new typologies, or innovative methodologies. Suggested topics include cross-cutting issues touching on law, sociology, economics, and technology.
This initiative highlights the importance of a multidisciplinary approach to privacy. It offers professionals in the field the opportunity to stay up to date with the latest developments in academic research.
EUROPEAN PARLIAMENT
European Parliament disables AI features on work devices
The European Parliament has disabled artificial intelligence features on MEPs’ and staff’s work devices due to concerns about cybersecurity and data protection. The decision affects writing assistants, automatic summarization, and other built-in AI features that use external cloud services.
The reasoning is clear: many of these features send sensitive data to third-party service providers, creating potential security risks. For DPOs, this case is a concrete example of how to apply the precautionary principle when evaluating new technologies. The Parliament’s decision underscores the importance of conducting thorough data-flow assessments before implementing AI solutions, especially in institutional contexts.
The institution has recommended that MEPs apply similar precautions to personal devices used for work, emphasizing the need for governance policies that cover the entire technology ecosystem.
COUNCIL OF THE EUROPEAN UNION
ST 5611 2026 ADD 3 REV 1
The European Commission’s Regulatory Scrutiny Board has issued its opinion on the revision of the Cybersecurity Act, a crucial step in the evolution of the European cybersecurity regulatory framework. The technical assessment provides strategic guidance for improving the existing legislation.
For Data Protection Officers, this revision is particularly important because the Cybersecurity Act interacts directly with the GDPR in defining personal data security standards. The proposed changes could introduce new technical and organizational requirements that will impact compliance processes and privacy impact assessments.
The Board’s opinion is a key element in anticipating future regulatory developments and preparing organizations for new cybersecurity challenges in Europe, requiring careful monitoring of legislative developments.
DIGITAL MARKETS & PLATFORM REGULATION
X challenges €120 million EU fine
Elon Musk’s X has taken the European Commission’s historic €120 million fine for DSA violations to court, claiming “serious procedural errors” and “prosecutorial bias.” The penalty, the first of its kind under the Digital Services Act, concerned insufficient transparency and the misleading design of blue ticks.
Three appeals have been filed with the EU Court of Justice by X Internet Unlimited, X Holdings, XAI Holdings, and, apparently, Musk himself. This case represents the first judicial challenge to a DSA fine and could set crucial precedents on enforcement, penalty calculation, and fundamental rights protections under the 2022 regulation.
For DPOs, this development underscores the importance of maintaining rigorous compliance procedures and detailed documentation to prepare for potential regulatory disputes arising from new digital regulations.
Brussels investigates Shein for child sex dolls
The European Commission has opened a formal investigation against Shein under the DSA, focusing on violations related to the sale of illegal products, including child-like sex dolls that could constitute child pornography. The investigation, which is being treated as a priority, also examines gamification mechanisms and the transparency of recommendation algorithms.
Brussels is scrutinizing the platform’s addictive strategies, such as reward points and gamification, which could harm the well-being of consumers, especially minors. The Commission will also verify whether Shein complies with DSA obligations regarding the transparency of recommendation systems and offers options that are not based on profiling.
This case represents a crucial test of the DSA’s enforcement on global marketplaces, highlighting how platforms must implement stricter controls on content and products marketed in the EU.
EU investigation into xAI for AI-generated sexualized images
The Irish Data Protection Commission has launched a “large-scale” investigation into X over non-consensual sexualized images generated by xAI’s Grok chatbot, which processed EU user data. The GDPR investigation examines whether X complied with fundamental data protection obligations, following the controversy over sexualized deepfakes generated in January.
The investigation adds to existing probes under the DSA and follows raids on X’s Paris offices by French and European investigators. The UK ICO has also launched a parallel investigation into X and xAI, citing “serious concerns” about Grok’s use of personal data.
For DPOs, this case highlights the urgent need to assess the privacy risks associated with generative AI and to implement robust safeguards when processing personal data for AI model training or inference.
EU Commission launches investigation into Shein for dolls and weapons
The European Commission has launched its first formal investigation into Shein after discovering child sex dolls and weapons among the products sold, investigating possible violations of the DSA. The investigation examines whether the platform has adequately assessed and mitigated the risks of selling illegal goods, including unsafe toys and cosmetics.
The investigation also covers Shein’s “addictive” design, specifically bonus point programs and gamification that could harm consumers’ mental and physical well-being. The Commission is also examining the transparency of the platform’s recommendation system.
Shein has stated that it takes DSA obligations “seriously” and has invested significantly in compliance measures, including age checks to prevent minors from accessing inappropriate content. DSA violations can result in fines of up to 6% of global annual turnover.
[Source](https://www.politico.eu/article/embargo-for-noon-feb-17-eu-launches-probe-into-shein-after-child-like-sex-dolls-scandal)
INTERNATIONAL DEVELOPMENTS
Cornwall Council exposes personal data in trans rights dispute
Cornwall Council committed a serious personal data breach by sharing confidential information about ten citizens who had filed complaints against a councilor for discriminatory comments. The breach exposed not only the names of the complainants (including those who had requested anonymity), but also their home addresses, email addresses, and phone numbers.
The incident highlights systemic issues in the management of complaint procedures by public bodies. For DPOs, it is a case in point of how inadequate procedures can lead to massive breaches, particularly when they involve sensitive data of vulnerable citizens. The lack of controls on internal information flows poses a significant risk to any public organization.
France: national bank account registry breached, 1.2 million users affected
The French Ministry of Finance suffered a cyberattack that compromised FICOBA, the centralized bank account registry. The attackers used credentials stolen from a public official to access data from 1.2 million accounts, including IBANs, account holder identities, addresses, and tax identification numbers.
The incident paralyzed the system’s operations and represents one of the most serious attacks ever suffered by the French financial infrastructure. The compromise of such a strategic registry raises crucial questions about the security of government databases and the effectiveness of access controls. The case demonstrates how compromised credentials remain the primary attack vector against critical infrastructure.
PayPal: software error exposes sensitive data for six months
PayPal reported a data breach that exposed sensitive personal information, including Social Security numbers, for nearly 6 months. The incident, caused by a software error in the PayPal Working Capital application, affected approximately 100 customers from July to December 2025.
Despite the limited number of users involved, the duration of the exposure and the sensitivity of the compromised data make this incident particularly significant. PayPal implemented immediate corrective measures and is offering free credit monitoring services for two years. The case underscores the importance of thoroughly testing code changes and continuously monitoring systems to detect data-flow anomalies in a timely manner.
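The continuous monitoring the PayPal case calls for can be as simple as flagging statistical outliers in access volumes. The sketch below is a naive z-score check on daily record-access counts; it is purely illustrative (the threshold, the counts, and the function name are assumptions, not anything PayPal actually runs).

```python
from statistics import mean, pstdev

def flag_anomalies(daily_counts, threshold=2.0):
    """Flag indices of days whose volume deviates more than `threshold`
    population standard deviations from the series mean.

    A deliberately naive z-score check; real monitoring would use
    rolling baselines, seasonality, and per-endpoint series."""
    mu = mean(daily_counts)
    sigma = pstdev(daily_counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(daily_counts)
            if abs(c - mu) / sigma > threshold]

# Hypothetical access volumes: the spike on day 6 stands out.
counts = [102, 98, 105, 99, 101, 97, 950, 100]
print(flag_anomalies(counts))  # → [6]
```

Even a crude check like this, run daily against export or API logs, would have surfaced a months-long exposure far sooner than a manual review cycle.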
PromptSpy: first Android malware to exploit Google’s AI
ESET has discovered PromptSpy, the first Android malware to use Google’s Gemini AI to automate its malicious functions. The malware sends prompts to the artificial intelligence, along with screenshots of the device, and receives detailed instructions to maintain persistence in the system and avoid removal.
This evolution represents a paradigm shift in mobile threats, demonstrating how generative AI can be weaponized to create more adaptive and resilient malware. The malware, mainly distributed in Argentina, uses Android accessibility services to perform remote actions and intercept credentials. The innovative approach could inspire a new generation of more sophisticated and difficult-to-counter threats.
ARTIFICIAL INTELLIGENCE
Red Lines under the EU AI Act: Understanding ‘Prohibited AI Practices’
The European AI Act has introduced insurmountable “red lines” that prohibit specific artificial intelligence practices deemed too dangerous or unethical. These include harmful manipulation, social scoring, predictive assessments of individuals’ criminal risk, emotion recognition in workplaces and schools, and real-time remote biometric identification in public spaces for law enforcement purposes.
Article 5 has applied since February 2025 and, as of August 2025, is fully enforceable, with penalties of up to €35 million or 7% of global annual turnover. For DPOs, this represents a new compliance area that intersects with the GDPR and DSA, requiring an integrated approach to risk assessment and the governance of personal data used in AI systems.
AI Laws in the United States 2023-2025
Between 2023 and 2025, the United States passed 27 AI laws in 14 states, marking a shift toward sector-specific regulations. In 2025, particular attention was paid to AI chatbots, with five states (California, Maine, New Hampshire, New York, and Utah) introducing transparency requirements and security protocols, especially for mental health applications.
The legislative trend has shifted from general framework laws to targeted measures for specific use cases. For DPOs operating globally, this regulatory mosaic requires precise mapping of jurisdictions and their respective obligations, particularly for automated decision-making systems and the processing of sensitive data.
Global AI Summit: 88 Countries Sign Declaration Without Binding Commitments
The fourth global AI summit in New Delhi saw 88 countries, including the US, UK, and EU, sign a declaration that prioritizes the “democratization” of AI over security. The document, promoted by India, emphasizes accessibility and large-scale adoption of the technology, including explicit support for open-source AI.
The summit marks a paradigm shift from the security-focused first meeting in 2023 in the UK. For DPOs, this shift towards accessibility could result in greater proliferation of AI tools in organizations, requiring more robust frameworks for impact assessment and privacy risk management.
Controversial Moments from India’s AI Summit
Notable absences and controversies marked the AI summit in New Delhi. Bill Gates canceled his keynote address due to renewed scrutiny of his past ties to Jeffrey Epstein, while Nvidia’s Jensen Huang was absent due to illness. The timing of the summit, coinciding with the Chinese New Year and the launch of Trump’s Board of Peace, limited high-level participation.
A viral moment was Galgotias University’s “robo-dog fiasco,” which presented a robotic demo that turned out to be misleading. These episodes highlight the challenges of organizing global AI events and the need for DPOs to evaluate the technologies presented critically, distinguishing between marketing and real technical innovation.
EU Criticized for Over-Regulating AI
During the New Delhi summit, the European approach to AI was heavily criticized. Sriram Krishnan, White House advisor on AI, called the AI Act “unfriendly to entrepreneurs” and called on the EU to focus less on governance and more on innovation. Several industry representatives criticized the EU for “shooting itself in the foot” with premature regulation.
Europe, which in 2023 was a leader in AI safety conversations, now appears isolated on the global stage. For European DPOs, this scenario presents a challenge: maintaining compliance with the strict AI Act while the global ecosystem favors more permissive approaches, potentially creating competitive disadvantages for European organizations.
CYBERSECURITY
CISA: BeyondTrust Vulnerability Exploited in Ransomware Attacks
The CVE-2026-1731 vulnerability in BeyondTrust Remote Support and Privileged Remote Access products is now being actively exploited in ransomware campaigns. The flaw, which allows remote code execution via OS command injection, affects versions up to 25.3.1 for Remote Support and 24.3.4 for Privileged Remote Access.
CISA has added the vulnerability to the KEV catalog with a patch deadline of only three days, highlighting the severity of the situation. Of particular concern is the fact that exploits were already in circulation as of January 31, making CVE-2026-1731 a zero-day for at least a week before its public disclosure on February 6.
For DPOs, this case underscores the importance of accelerated patching procedures for critical systems and the need for up-to-date inventories of privileged remote access systems.
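CISA publishes the KEV catalog as a machine-readable JSON feed, so short patch deadlines like this one can be tracked automatically. The sketch below filters KEV-style records by remediation due date; the record layout mirrors the feed’s `cveID`/`dueDate` fields, but the sample entries and their due dates are illustrative assumptions, not real feed data.

```python
from datetime import date

# Illustrative records shaped like CISA KEV feed entries (dueDates are assumed).
sample_kev = [
    {"cveID": "CVE-2026-1731",
     "vulnerabilityName": "BeyondTrust RS/PRA OS command injection",
     "dueDate": "2026-02-20"},
    {"cveID": "CVE-2026-22769",
     "vulnerabilityName": "Dell RecoverPoint hardcoded credentials",
     "dueDate": "2026-03-10"},
]

def due_within(entries, today: date, days: int):
    """Return cveIDs whose remediation dueDate falls within `days` of `today`."""
    urgent = []
    for e in entries:
        due = date.fromisoformat(e["dueDate"])
        if 0 <= (due - today).days <= days:
            urgent.append(e["cveID"])
    return urgent

print(due_within(sample_kev, date(2026, 2, 18), 7))  # → ['CVE-2026-1731']
```

In practice the same filter would run against the live feed downloaded from CISA, feeding a ticketing queue so that three-day deadlines are not discovered after the fact.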
AI-Assisted Threat Actor Compromises 600+ FortiGate Devices
A Russian-speaking cybercriminal group compromised over 600 FortiGate devices in 55 countries using commercial generative AI services. The attack, documented by Amazon Threat Intelligence, occurred between January and February 2026, not through software vulnerabilities but through weak configurations and easily guessed credentials.
The attackers used AI to develop tools, plan attacks, and generate commands, demonstrating how artificial intelligence is lowering the barriers to entry for cybercrime. The group compromised entire Active Directory environments and extracted credential databases, likely setting the stage for ransomware attacks.
This case highlights the urgent need for organizations to implement multi-factor authentication and robust credential management policies, especially for critical perimeter security systems such as firewalls.
Dell RecoverPoint: Zero-Day Exploited since 2024
A critical vulnerability (CVE-2026-22769, CVSS 10.0) in Dell RecoverPoint for Virtual Machines has been exploited as a zero-day by the Chinese group UNC6201 since mid-2024. The vulnerability stems from hardcoded credentials that allow unauthorized access to the underlying operating system with root privileges.
Google Mandiant has identified less than a dozen impacted organizations, but warns that the full scope of the campaign remains unknown. Attackers used the hardcoded credentials to access the Tomcat Manager, upload web shells, and install BRICKSTORM and GRIMBOLT backdoors for long-term persistence.
The case highlights the importance of thorough security audits for backup and recovery systems, which are often overlooked in risk assessments. DPOs should ensure that these critical systems are included in vulnerability management programs.
[Source](https://thehackernews.com/2026/02/dell-recoverpoint-for-vms-zero-day-cve.html)
Italian Analysis: 600 Firewalls Compromised with AI
Red Hot Cyber’s analysis confirms the severity of the AI-assisted campaign against FortiGate firewalls, highlighting how artificial intelligence is revolutionizing the threat landscape. Attackers have automated reconnaissance, quickly analyzing exposed systems and cracking passwords at scale.
The speed of the attack—600 systems in five weeks—represents a paradigm shift in cybercrime. What once required months of manual labor can now be automated, allowing even groups with limited resources to conduct industrial-scale operations.
For DPOs, this scenario calls for an urgent review of defense strategies, prioritizing automated defenses and implementing security controls to detect and mitigate high-speed automated attacks.
Amazon Confirms: AI Transforms the Threat Landscape
Amazon’s report provides technical details on the AI-assisted campaign, revealing how attackers used AI-generated Python and Go tools to automate the extraction and decryption of firewall configurations. The analyzed code shows characteristics typical of AI: redundant comments, simplistic architecture, and naive JSON handling.
Once devices were compromised, attackers extracted SSL-VPN credentials, IPsec configurations, network topology, and routing information. This data was then used to move laterally across compromised networks using scanners such as gogo and Nuclei to identify HTTP services and domain controllers.
The incident demonstrates how AI is democratizing advanced offensive capabilities, making operations previously reserved for sophisticated APTs accessible to criminals with limited technical skills. DPOs must prepare to face unprecedented volumes and speeds of attacks.
TECH & INNOVATION
Microsoft: Office bug exposes confidential emails to Copilot AI
Microsoft has confirmed a bug that, for weeks, allowed its Copilot AI to access and summarize confidential customer emails without authorization. The issue, identified as CW1226324, affected messages with confidentiality labels, bypassing organizations’ data loss prevention (DLP) policies.
The bug is particularly relevant to DPOs, as it highlights the risks of integrating AI systems with sensitive data. Despite the protective measures in place, Microsoft's language model processed confidential information, potentially violating the GDPR. The timing of the discovery, after weeks of exposure, also raises questions about transparency and notification timelines.
Microsoft began rolling out the fix in February but did not disclose the number of customers affected. The incident underscores the importance of ongoing audits of AI systems and clear policies for data management in integrated cloud solutions.
SCIENTIFIC RESEARCH
Selection of the most relevant papers of the week from arXiv on AI, Machine Learning, and Privacy
Machine Unlearning and Privacy
MeGU: Machine-Guided Unlearning with Target Feature Disentanglement
A new technique for implementing the “right to be forgotten” in ML models, addressing the trade-off between effective deletion of target data and maintaining model utility. Particularly relevant for DPOs who need to manage GDPR deletion requests while maintaining system performance. arXiv
Privacy-Preserving Federated Learning
U-FedTomAtt: Ultra-lightweight Federated Learning with Attention for Tomato Disease Recognition
An ultra-lightweight federated learning approach for agricultural applications that reduces communication overhead and privacy risks by avoiding the centralization of raw data. Demonstrates how FL can be effectively implemented in vertical sectors with resource and privacy constraints. arXiv
Catastrophic Forgetting Resilient One-Shot Incremental Federated Learning
Framework for incremental federated learning that handles geographically distributed privacy-sensitive data flows. Addresses compliance challenges in big-data scenarios where centralization is problematic for privacy and latency, offering solutions for continuous learning. arXiv
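The federated pattern underlying both papers, in which clients train locally and share only model parameters rather than raw records, can be sketched with a toy scalar model. This is a generic federated-averaging (FedAvg) illustration under simplified assumptions, not the specific method of either paper.

```python
def local_update(weights: list[float], grads: list[float], lr: float = 0.1) -> list[float]:
    """One local gradient step on a client; raw data never leaves the client."""
    return [w - lr * g for w, g in zip(weights, grads)]

def fed_avg(client_weights: list[list[float]]) -> list[float]:
    """Server-side aggregation: average the clients' parameter vectors."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Two clients start from the same global model and send back only parameters.
clients = [local_update([1.0, 2.0], g) for g in ([0.5, -0.5], [1.5, 0.5])]
print(fed_avg(clients))
```

The privacy benefit claimed in these papers rests on exactly this data flow: only the averaged parameters cross the network, though parameter updates can still leak information and are often combined with differential privacy or secure aggregation.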
PII Identification and De-anonymization
Discovering Universal Activation Directions for PII Leakage in Language Models
UniLeak framework for identifying activation directions that control the leakage of personal information in LLMs. Crucial for compliance officers who need to assess and mitigate PII exposure risks in language models used in organizations. arXiv
Large-scale online deanonymization with LLMs
Study demonstrating the ability of LLMs to deanonymize users on a large scale using pseudonymous online profiles. Highlights significant risks to traditional anonymization techniques and the need to reevaluate data protection strategies in the AI era. arXiv
Differential Privacy and Security
Efficient privacy loss accounting for subsampling and random allocation
Improvements in privacy loss accounting for sampling schemes in differentially private optimization contexts. Provides more accurate tools for quantifying privacy guarantees, essential for compliance with regulations that require mathematical demonstrations of protection. arXiv
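For readers unfamiliar with why subsampling matters here, the classic amplification bound (the textbook result, not the paper's tighter accountant) states that a mechanism satisfying pure ε-DP on a subsample drawn with rate q satisfies log(1 + q(e^ε − 1))-DP overall:

```python
import math

def amplified_epsilon(eps: float, q: float) -> float:
    """Privacy amplification by subsampling for a pure eps-DP mechanism:
    with sampling rate q, the overall guarantee is log(1 + q*(e^eps - 1))."""
    return math.log(1.0 + q * (math.exp(eps) - 1.0))

# With a 1% sampling rate, a per-step budget of eps=1.0 shrinks to roughly 0.017.
print(round(amplified_epsilon(1.0, 0.01), 4))
```

With q = 1 (no subsampling) the bound reduces to the original ε, as expected; modern accountants such as the one in the paper sharpen this across many compositions.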
Exact Certification of Data-Poisoning Attacks Using Mixed-Integer Programming
Verification framework for data poisoning attacks with complete guarantees during neural network training. Enables precise certification of robustness against data manipulation, a crucial step in validating the integrity of AI systems in high-risk environments. arXiv
Fail-Closed Alignment for Large Language Models
Proposes a fail-closed principle for robust alignment in LLMs, identifying structural weaknesses in current rejection mechanisms. Fundamental for AI governance, which must ensure safe behavior even under jailbreaking attacks and prompt manipulation. arXiv
AI ACT IN A NUTSHELL - Part 9
Article 13 - Transparency and provision of information to deployers
After analyzing the automatic registration requirements under Article 12 in the last episode, we now move on to one of the most significant pillars of the AI Act Regulation for operational governance: Article 13, dedicated to transparency and the provision of information to deployers of high-risk AI systems.
The key principle: sufficient transparency for informed use
Article 13 stipulates that high-risk AI systems must be designed and developed so that their operation is sufficiently transparent, enabling deployers to interpret the system’s output and use it appropriately. This requirement is not limited to a formal documentation obligation: it imposes a genuine design oriented towards comprehensibility.
Transparency must be accompanied by appropriate instructions for use, in an appropriate format, including concise, complete, accurate, and clear information that is relevant, accessible, and understandable to deployers. The instructions must enable deployers to understand the system’s capabilities and limitations and make informed decisions about its use.
Minimum content of instructions for use
The Regulation specifies in detail what the instructions provided to deployers must contain. In particular: the identity and contact details of the provider; the characteristics, capabilities, and limitations of the system's performance, including the level of accuracy for specific groups of people; modifications predetermined by the provider; human oversight measures; the necessary computational and hardware resources; and the expected lifespan of the system together with any necessary maintenance measures.
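Organizations reviewing supplier documentation can turn this list into a simple completeness check. The sketch below uses field names that are this author's shorthand for the items above, not terminology defined by the AI Act itself.

```python
# Illustrative shorthand for the instructions-for-use items listed above
# (assumption: real checklists would track the full Article 13(3) wording).
REQUIRED_FIELDS = {
    "provider_identity",         # identity and contact details of the provider
    "capabilities_limitations",  # characteristics, capabilities, limitations
    "accuracy_per_group",        # accuracy levels, incl. specific groups of people
    "predetermined_changes",     # modifications predetermined by the provider
    "human_oversight",           # human oversight measures
    "hardware_resources",        # computational and hardware resources needed
    "expected_lifetime",         # expected lifespan and maintenance measures
}

def missing_fields(instructions: dict) -> set:
    """Return the required items absent from a draft instructions document."""
    return REQUIRED_FIELDS - instructions.keys()

draft = {"provider_identity": "...", "human_oversight": "..."}
print(sorted(missing_fields(draft)))
```

A gap flagged by such a check is a prompt to go back to the provider before deployment, since the deployer's own obligations depend on having this information.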
Particularly relevant is the obligation to specify the levels of accuracy, robustness, and cybersecurity against which the system has been tested and validated, together with the foreseeable circumstances that could pose risks to health, safety, or fundamental rights.
The challenge of interpretability
A crucial aspect of Article 13 concerns the need for deployers to correctly interpret the system’s outputs. That goes beyond simple algorithmic transparency: it requires that the information provided be calibrated to the level of technical expertise of the intended recipients. A system used by medical personnel will require a different type of explanation than one used in the financial sector or human resources management.
This need for interpretability is directly linked to the right to explanation provided for in the GDPR for automated decisions, creating a regulatory bridge between the two regulations that DPOs must manage in an integrated manner.
Practical implications for organizations
For DPOs, compliance officers, and legal advisors, Article 13 entails a thorough review of how information flows to those who use AI systems. Organizations operating as deployers will need to verify that they have received complete and understandable documentation from the provider, while providers will need to invest in producing instructions that go well beyond the traditional technical manual.
Transparency thus becomes an ongoing design requirement, not a one-time compliance exercise. Every system update, change in the terms of use, or shift in application context requires a corresponding update to the information provided to deployers.
Penalties
Failure to comply with transparency obligations may result in administrative fines of up to €15 million or, for companies, up to 3% of total worldwide annual turnover for the preceding financial year, whichever is higher. The completeness and adequacy of the information provided will be key factors in the competent authorities' compliance assessments.
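The arithmetic of the ceiling is worth making explicit: for a company, the cap is the higher of €15 million and 3% of worldwide annual turnover, so the percentage prong dominates once turnover exceeds €500 million (special, lower caps apply to SMEs). A minimal sketch:

```python
def max_fine(turnover_eur: float) -> float:
    """Upper bound of the administrative fine for a company under the
    transparency obligations: the higher of EUR 15m or 3% of worldwide
    annual turnover for the preceding financial year."""
    return max(15_000_000, 0.03 * turnover_eur)

print(max_fine(100_000_000))    # 3% of EUR 100m is EUR 3m, so the EUR 15m floor applies
print(max_fine(2_000_000_000))  # 3% of EUR 2bn exceeds the floor
```

This is a ceiling, not a tariff: actual fines are set case by case, weighing factors such as the gravity and duration of the infringement.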
Next week, in Part 10, we will analyze Article 14 on human oversight, one of the most innovative and controversial requirements of the Regulation, which mandates meaningful human control over high-risk AI systems.
Noteworthy events and meetings
Data Protection Working Group meeting, Council (February 27, 2026)
EDPB | Info
117th Plenary meeting (March 18, 2026)
EDPB | Info
Blog post: Advancing into Practice: Third Meeting of the AI Act Correspondents Network
EDPS | Info
European Union endorses Leaders’ Declaration at AI Summit in India
European Commission | Info
Executive Vice-President Virkkunen in India for summit on artificial intelligence
European Commission | Info
Conclusion
This week highlights a two-pronged approach by European authorities: on the one hand, direct enforcement against specific violations; on the other, the development of regulatory tools that anticipate future challenges. These two tracks feed into each other, creating an increasingly mature and operational regulatory landscape.
The Italian Data Protection Authority has acted decisively on three distinct but converging fronts. The sanction imposed on eCampus for facial recognition in online courses and the negative opinion on the body cameras of the Pescara local police share the same logic: biometric and surveillance technologies require enhanced safeguards and cannot be implemented without a solid legal basis, an adequate DPIA, and the evaluation of less invasive alternatives. The Pescara case adds a critical dimension: the risk of data transfer to third countries through US providers, in violation of Directive 2016/680, demonstrates how the technological choices of public bodies have immediate implications for the protection of fundamental rights. In constructive contrast, the favorable opinion on the AGID guidelines for service accessibility and the new provisions on health screening demonstrate an authority capable of balancing protection and innovation, providing DPOs with clear operational frameworks for implementing privacy-friendly public services.
The EDPB has released its 2025 coordinated action report on the right to erasure, involving 32 authorities and 764 data controllers. The seven recurring challenges identified—from inadequate internal procedures to problematic backup management—offer a valuable roadmap for privacy professionals, while signaling that coordinated European enforcement is achieving unprecedented systematic analysis capabilities. The report on anonymization and pseudonymization techniques, along with the response to spyware abuse, completes a picture of growing attention to the most complex technical challenges of data protection.
The European Parliament’s decision to disable AI capabilities on work devices represents an illuminating paradox: the institution that shaped the AI Act applies the precautionary principle in its own daily operations, recognizing that sending sensitive data to external cloud services creates concrete cybersecurity risks. This institutional precedent provides DPOs with a powerful argument for advocating in-depth assessments before adopting AI tools in the workplace.
The two-year track record of the Digital Services Act—with 50 million moderation decisions overturned in favor of users—confirms the DSA’s effectiveness as an operational tool complementary to the GDPR. The investigation into Shein for addictive design, illegal products, and algorithmic opacity, along with X’s challenge to the €120 million fine and the Irish investigation into xAI for sexual deepfakes, demonstrate how the European regulatory ecosystem is converging on multiple fronts to hold global platforms accountable.
On the cybersecurity front, the AI-assisted campaign targeting over 600 FortiGate firewalls, documented by Amazon Threat Intelligence, marks a paradigm shift: artificial intelligence is democratizing offensive capabilities once the prerogative of sophisticated APT groups, compressing operations that previously took months into a matter of weeks. The Microsoft bug that exposed confidential emails to Copilot AI and the discovery of PromptSpy, the first Android malware to exploit Gemini, confirm that both the integration of AI into systems and the threats that exploit it are advancing faster than countermeasures.
For DPOs and compliance officers, these developments converge on a common need: governance frameworks that integrate the GDPR, AI Act, DSA, and cybersecurity regulations into a unified approach. The era of siloed compliance is coming to an end, giving way to the need for impact assessments that simultaneously consider data protection, cybersecurity, algorithmic transparency, and fundamental rights.
How will the professional skills of DPOs be redefined in a landscape where understanding AI architectures and cybersecurity dynamics becomes as important as legal knowledge of the GDPR?
📧 Edited by Nicola Fabiano
Lawyer - Fabiano Law Firm
🌐 Studio Legale Fabiano: https://www.fabiano.law
🌐 Blog: https://www.nicfab.eu
🌐 DAPPREMO: www.dappremo.eu
