Introduction: an increasingly complex regulatory framework

On October 30, 2025, the European Parliament published a [fundamental study](https://www.europarl.europa.eu/thinktank/en/document/ECTI_STU(2025)778575) commissioned by the Committee on Industry, Research and Energy (ITRE) that analyzes the interaction between the AI Act and the European Union’s digital legislative framework. The study, prepared by Hans Graux, Krzysztof Garstka, Nayana Murali, Jonathan Cave, and Maarten Botterman, represents the first systematic and in-depth analysis of the overlaps, gaps, and complexities generated by the intersection of multiple regulatory instruments in the European digital ecosystem.

The AI Act, adopted in June 2024 as the first comprehensive regulatory framework on artificial intelligence, does not operate in isolation but is part of an already dense and complex legislative fabric that includes the GDPR, the Data Act, the Digital Services Act (DSA), the Digital Markets Act (DMA), the Cyber Resilience Act (CRA), the NIS2 Directive, and other instruments. While each of these acts pursues legitimate and targeted objectives, the study raises crucial questions about their combined impact on the competitiveness, coherence, and innovation capacity of the European artificial intelligence ecosystem.

The regulatory logic of the AI Act: a risk-based approach

The study begins with an in-depth analysis of the structural and philosophical rationale behind the AI Act, which rests on a risk-based approach articulated across four tiers (a toy classification sketch follows the list):

  1. Prohibited practices: absolute ban on specific uses of AI considered unacceptable
  2. High-risk systems: stringent obligations for systems posing significant risks
  3. Transparency obligations: specific requirements for certain use cases
  4. General-purpose AI models (GPAI): a standalone set of obligations, with additional duties for models posing systemic risk
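To make the tiered logic concrete, here is a toy Python sketch of the four levels and the headline obligations named above. The tier labels and obligation strings are illustrative shorthand only, not the Regulation’s exhaustive terminology, and a real compliance inventory would cite the specific articles applicable to each system.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four-level risk ladder, as summarized above."""
    PROHIBITED = "prohibited practice"        # banned outright (Art. 5)
    HIGH_RISK = "high-risk system"            # stringent obligations (Annexes I/III)
    TRANSPARENCY = "transparency obligation"  # specific use cases (e.g., chatbots)
    GPAI = "general-purpose AI model"         # standalone regime, incl. systemic risk

# Illustrative mapping from tier to headline obligations (shorthand labels only).
HEADLINE_OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.PROHIBITED: ["do not place on the market or put into service"],
    RiskTier.HIGH_RISK: ["risk management", "conformity assessment", "logging"],
    RiskTier.TRANSPARENCY: ["disclose AI interaction / synthetic content"],
    RiskTier.GPAI: ["technical documentation", "extra duties if systemic risk"],
}
```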

The AI Act draws heavily on the European product safety legislation model (New Legislative Framework), but adds innovative requirements relating to fundamental rights impact assessments (FRIA), traceability, and oversight.

Structural tensions highlighted by the study

The study identifies several structural tensions in the architecture of the AI Act:

Extension of product safety logic: applying traditional conformity-assessment mechanisms to domains that are hard to measure, such as fundamental rights compliance, proves problematic and difficult to verify with conventional testing methods.

Complex chains of responsibility: the applicability of the legislation to both providers and deployers of AI systems creates complex chains of responsibility, which are particularly burdensome for SMEs and non-specialist users.

Ambiguity in classification: the definition and classification of high-risk systems, based on lists contained in annexes and on subjective assessments of harm potential, introduce legal uncertainty.

Promotion of innovation vs. risk mitigation: although the Act promotes innovation (through regulatory sandboxes, exemptions for open source, and lighter regimes for non-systemic GPAI models), the overall framework remains primarily focused on harm mitigation, often through prescriptive obligations.

Critical interactions with other European digital regulations

The central part of the study analyzes in detail the interactions and overlaps between the AI Act and other European digital regulations, highlighting significant frictions and compliance challenges.

AI Act and GDPR: overlaps in impact assessments

The most complex and problematic interaction emerges with the GDPR. The AI Act introduces requirements for fundamental rights impact assessments (FRIAs) in cases that often also trigger data protection impact assessments (DPIAs) under the GDPR.

Key issues identified (a schematic sketch follows this list):

  • Different scopes, oversight, and procedural requirements: FRIA and DPIA differ substantially, creating duplication and uncertainty
  • Redundant transparency and logging obligations in both regimes
  • Ambiguity in the management of data subjects’ rights: uncertainty about how controllers and providers should manage rights of access, rectification, and erasure when personal data is embedded in complex AI models
  • Overlapping roles: AI system providers may simultaneously be controllers or processors under the GDPR, resulting in a cumulative set of obligations that is difficult to manage
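One way to visualize the FRIA/DPIA duplication is to sketch the two assessments as data records sharing a common core. The field names below are hypothetical, chosen purely for illustration; neither regulation prescribes a schema, and an actual template must follow the respective legal texts (AI Act Art. 27 and GDPR Art. 35).

```python
from dataclasses import dataclass, field

@dataclass
class SharedAssessmentCore:
    """Content a FRIA and a DPIA both need in substance (hypothetical fields)."""
    system_description: str
    purposes: list[str]
    affected_persons: list[str]
    risks_identified: list[str]
    mitigation_measures: list[str]

@dataclass
class FRIA(SharedAssessmentCore):
    """Fundamental rights impact assessment (AI Act): deployer-side additions."""
    deployment_context: str = ""
    human_oversight_measures: list[str] = field(default_factory=list)

@dataclass
class DPIA(SharedAssessmentCore):
    """Data protection impact assessment (GDPR): controller-side additions."""
    lawful_basis: str = ""
    necessity_and_proportionality: str = ""
```

The shared base class is precisely the overlap the study flags; procedural alignment of the two assessments (see the medium-term recommendations below) would let organizations fill that common core once.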

AI Act and Data Act: providers as data holders

While the AI Act regulates the design and deployment of AI systems, the Data Act ensures access to and portability of data generated by related products and services.

AI providers may be data holders under the Data Act and simultaneously subject to the obligations of the AI Act. The cumulative compliance burden is significant, especially in real-world testing contexts or those involving data transfers to third countries.

AI Act and Cyber Resilience Act: overlapping cybersecurity standards

High-risk AI systems must meet specific cybersecurity standards that may overlap with those imposed by the Cyber Resilience Act (CRA), which introduces mandatory cybersecurity requirements for products with digital elements.

Although the CRA provides a presumption of conformity with the AI Act’s cybersecurity requirements when its own essential requirements are met, the partial alignment between the two regulatory frameworks leaves room for interpretative uncertainty, particularly regarding:

  • Vulnerability management
  • Incident reporting
  • Security updates
  • Certifications

AI Act, DSA, and DMA: platforms and gatekeepers in the age of AI

Intermediary services (such as online platforms and search engines) that deploy or provide AI systems and models face overlapping, heightened transparency obligations.

Very large online platforms (VLOPs) and very large online search engines (VLOSEs) may face simultaneous risk assessment obligations under both the AI Act and the DSA, especially when GPAI models are provided through the intermediary service.

Areas of overlap identified:

  • AI-generated or manipulated content, both legal and illegal
  • Liability of intermediaries
  • Researcher access to data
  • Obligations on recommendation systems
  • AI-assisted content moderation

As for the Digital Markets Act (DMA), its provisions on data access, interoperability, and the prohibition of self-preferencing could be relevant to AI APIs or foundation models offered by gatekeepers. However, AI systems have not yet been designated as core platform services under the DMA, which limits the practical scope of these interactions.

AI Act and NIS2: cybersecurity and supply chain risk management

Entities under the NIS2 Directive that develop or deploy AI systems must comply with both cybersecurity requirements and AI-specific risk management frameworks.

The overlap is particularly pronounced in:

  • Incident reporting obligations: duplication between NIS2 and the AI Act
  • Supply chain risk governance: complex chains of responsibility
  • Cyber risk management measures: potential duplication of technical and organizational requirements

Other digital regulations in the ecosystem

The study also analyzes interactions with:

  • DORA (Digital Operational Resilience Act): for financial institutions using AI systems
  • Cybersecurity Act: security certifications and standards
  • Digital Identity Framework: AI-based authentication and identification systems
  • Platform-to-Business Regulation: algorithmic transparency for online platforms

Governance and institutional complexity: the role of the AI Office

The AI Act introduces an innovative governance architecture centered on the European AI Office (AIO), which presents both significant opportunities and risks.

Mandate and responsibilities of the AI Office

The AI Office is tasked with:

  1. Overseeing GPAI models: supervising general-purpose AI models, including those with systemic risk
  2. Supporting regulatory sandboxes: facilitating innovation in controlled environments
  3. Coordinating enforcement: harmonizing the application of the regulation across Member States

Overlap with existing authorities

The study highlights how the role of the AIO overlaps with that of existing regulators:

  • Data protection authorities (DPAs)
  • Market surveillance bodies
  • European Data Protection Board (EDPB)
  • National competent authorities for various digital regulations

The risk of regulatory fragmentation is significant, particularly in areas where competing competences lack robust coordination mechanisms.

Concerns about operational independence

The institutional position of the AIO within the Commission raises concerns about its operational independence and its ability to fulfill its mandate without a dedicated legal personality or protected resources.

The study recommends strengthening the AI Office’s autonomy by granting it independent agency status, similar to ENISA or the EDPS.

Strategic considerations: European competitiveness at risk?

The study raises crucial questions about Europe’s global competitiveness in AI, highlighting that the cumulative effect of EU digital legislation, while rooted in legitimate policy objectives, risks disproportionately burdening European innovators.

A long-standing concern: legislative hyperproduction in the digital sphere

The conclusions of this European Parliament study fully confirm the concerns I have repeatedly raised over the past few years at numerous conferences and public events: the European Union is producing an excess of digital legislation that risks stifling innovation under the weight of regulatory complexity.

I have always argued that we are facing a real legislative hyperproduction: it is not a question of disputing the need to regulate emerging technologies such as artificial intelligence, but of highlighting how the multiplication of overlapping regulatory instruments creates a tangle of obligations that is difficult to manage, especially for SMEs, startups, and operators that do not have large legal departments.

The findings of this study—the overlaps between FRIA and DPIA, the duplication of transparency and logging obligations, the fragmented chains of responsibility among multiple authorities, the interpretative uncertainty generated by the intersection of separately designed regulations—come as no surprise to those who, like me, have been following the evolution of the European digital regulatory framework for years.

The layered and sectoral approach adopted by the EU—a new regulation for every new technological challenge—inevitably generates the systemic complexity that the European Parliament study rigorously documents today. It is no longer enough to think about GDPR compliance, or AI Act compliance, or DSA compliance: we need to think about compliance with the entire digital regulatory ecosystem, which represents a disproportionate burden and a clear competitive disadvantage for Europe.

The study’s recommendations – particularly the long-term ones that call for a rethinking of the digital legislative framework from a more integrated perspective – are moving in the direction I have long hoped for: greater consistency, proportionality, and simplification are needed. Europe must choose whether it wants to be a global regulator or also a global digital innovator. With the current regulatory architecture, these two objectives appear increasingly at odds.

The European investment gap

The document emphasizes that this concern is particularly relevant given the relative underperformance of the EU in terms of investment and innovation in AI compared to the United States and China:

  • Private investment in AI: significantly lower than in the US and China
  • Number of AI startups: lower concentration than other global ecosystems
  • Talent and research: brain drain to other markets with more streamlined regulatory environments

The balance between protection and innovation

While the AI Act outlines a vision for human-centric and trustworthy AI, its intersection with other digital regulations is not always calibrated for agility, scalability, or global competitiveness.

The study highlights the importance of finding a balance between:

  • Protection of fundamental rights and security
  • Facilitation of innovation and economic competitiveness
  • Regulatory consistency and legal certainty
  • Proportionality of regulatory burdens

Recommendations: short, medium, and long term

The study presents a comprehensive set of recommendations stratified over three time horizons, specifying that they should not be understood as suggestions for immediate action but as reflections for future regulatory or policy actions.

Short-term recommendations: optimization of the application

Implementable without legislative changes, these focus on optimizing the application of the AI Act in relation to other regulations:

  1. Strengthen interaction and coordination between regulators: particularly for those outside the product compliance supervision ecosystem (data protection authorities, bodies responsible for the Data Act, DSA, etc.)

  2. Coordination of guidelines at EU level: ensure that guidance is coordinated where possible at the EU level to mitigate the risk of inappropriate fragmentation

  3. Harmonization of interpretative solutions: ensure that solutions deemed appropriate in one Member State are accepted as compliant in all others

  4. Best practice sharing mechanisms: create exchange networks between national authorities for common enforcement issues

Medium-term recommendations: targeted legislative adjustments

Focus on optimizing the AI Act through minor legislative changes without requiring fundamental changes in philosophy or overall approach:

  1. Harmonization of impact assessments: integration or at least procedural alignment between FRIA (AI Act) and DPIA (GDPR)

  2. Clarification of roles and responsibilities: more precise definitions of chains of responsibility among providers, deployers, and other actors

  3. Simplification for SMEs: proportionate regimes for small and medium-sized enterprises, including technical assistance and standardized templates

  4. One-stop-shop mechanisms: single point of contact for cross-regulatory compliance

  5. Strengthening the AI Office: provision of greater resources, autonomy, and legal personality

Long-term recommendations: rethinking the digital legislative framework

Fundamental reflections on future digital legislation in the EU, approached from a blank-slate perspective over a time horizon of approximately 20 years, without considering the ‘legacy’ of existing regulations:

  1. Unified legislative approach: consider an integrated Digital Code that harmonizes common obligations on cross-cutting issues (transparency, security, user rights)

  2. Strengthened principle of proportionality: systematic application of cost-benefit assessments for regulatory burdens

  3. Regulatory agility: faster review and adaptation mechanisms for evolving technologies

  4. Simplified governance: streamlining the institutional architecture to avoid overlaps and gaps in competence

  5. Global level playing field: consider the competitive impact of EU regulations compared to other legal systems

For lawyers, DPOs, compliance officers, and organizations operating in the AI ecosystem, this study represents a fundamental roadmap for understanding the complexity of the emerging regulatory framework.

The race for standards: CEN CENELEC accelerates to support the AI Act

The regulatory complexity highlighted by the European Parliament study is also particularly evident in the area of technical standardization, a crucial element for the operational implementation of the AI Act and other digital regulations.

On October 23, 2025, just one week before the study was published, CEN and CENELEC announced an exceptional decision to accelerate the development of European standards in support of the AI Act, confirming the urgency and pressure that the regulatory system is generating.

Extraordinary measures to meet deadlines

During the joint meeting of the Technical Boards on October 14-16, 2025, CEN and CENELEC adopted an *exceptional package of measures* to accelerate the delivery of key standards developed by CEN-CLC/JTC 21 “Artificial Intelligence” and the deliverables required by Standardization Request M/593 (and its Amendment M/613).

The extraordinary measures include:

  • Creation of a small drafting group: composed of experts already active in the committee’s work, tasked with finalizing the six most delayed drafts before their submission to the Working Groups
  • Stringent time target: to ensure the availability of standards by Q4 2026
  • Balance between urgency and inclusiveness: decision taken in consultation with institutional partners and Annex III organizations

The dilemma between speed and quality of the process

This exceptional decision reveals a fundamental tension in the system: on the one hand, the urgency of providing operational technical standards to support the implementation of the AI Act (which is gradually coming into force), and on the other, the need to maintain the principles of inclusivity, transparency, and consensus that characterize European standardization.

CEN and CENELEC emphasize that this is a temporary and exceptional measure, but the very need to deviate from normal standardization processes underscores the time pressure created by the overlap of multiple digital regulations, each with its own deadlines and requirements.

The importance of standards in managing complexity

Technical standards are an essential tool for operationalizing the abstract requirements of regulations. In the context of the AI Act, standards are particularly critical for:

  1. Defining common risk assessment methodologies for high-risk AI systems
  2. Specifying technical requirements for robustness, accuracy, and cybersecurity
  3. Providing presumptions of conformity: those who follow harmonized standards benefit from the presumption of conformity with regulatory requirements
  4. Harmonizing interpretations across different Member States and sectors

Among the key deliverables currently under development, prEN 18286 Quality Management Systems is particularly important, as it will form part of the future conformity assessment architecture under the AI Act.

The forced acceleration of the standardization process empirically confirms what the European Parliament’s study highlighted: the proliferation of interconnected digital regulations generates systemic pressure that affects all levels of the regulatory ecosystem, from legislative governance to technical standardization.

The fact that CEN-CLC/JTC 21 has gathered “the widest participation of stakeholders ever involved in European AI standardization” is a positive sign of engagement, but also of the complexity of the interests at stake: industry, SMEs, academia, NGOs, and Annex III organizations must find technical compromises on regulations that overlap and interact in ways that are not always predictable.

Risks of an accelerated process

The adoption of exceptional measures to accelerate standardization entails risks that must be carefully monitored:

  • Less effective inclusiveness: small drafting groups may not capture all relevant perspectives
  • Technically incomplete standards: time pressure can lead to standards that require frequent revisions
  • Misalignment with technological evolution: standards developed quickly risk being obsolete by the time they are published
  • Overlap with other standards: insufficient coordination with standards on cybersecurity (CRA), data quality (Data Act), risk management (NIS2)

The CEN CENELEC decision, while necessary, underscores the importance of the coordination recommendations contained in the European Parliament study: only an integrated and harmonized approach can prevent the technical and regulatory level of standards from replicating the fragmentation evident at the legislative level.

Suggested practical actions

  1. Integrated regulatory mapping: conduct systematic analyses of all regulations applicable to your AI systems and services (a minimal code sketch follows this list)

  2. Cross-functional compliance governance: set up teams that integrate expertise on the AI Act, GDPR, cybersecurity, and digital law

  3. Standardization of impact assessments: develop integrated frameworks covering FRIA, DPIA, and other required assessments

  4. Monitoring of authority guidance: closely follow the guidelines of the AI Office, EDPB, ENISA, and other relevant authorities

  5. Engagement with regulatory sandboxes: test innovative approaches in controlled contexts

  6. Coordinated advocacy: contribute to consultation processes to highlight practical enforcement issues
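As a complement to point 1, the following minimal Python sketch shows what an integrated regulatory mapping could look like as a first-pass screening tool. The attribute names and triggering conditions are simplified assumptions for illustration; an actual determination of applicability requires legal analysis of each instrument.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Hypothetical inventory attributes for one AI system or service."""
    is_high_risk: bool              # preliminary AI Act classification
    processes_personal_data: bool
    generates_product_data: bool    # connected-product data (Data Act)
    has_digital_elements: bool      # product in scope of the CRA
    operator_is_nis2_entity: bool
    operator_is_vlop_or_vlose: bool
    operator_is_financial_entity: bool

def applicable_regimes(profile: AISystemProfile) -> list[str]:
    """Toy first-pass flagging of the regimes discussed in this article."""
    # Assumed in scope of the AI Act by construction.
    regimes = ["AI Act (high-risk regime)" if profile.is_high_risk else "AI Act"]
    if profile.processes_personal_data:
        regimes.append("GDPR (incl. DPIA where high risk to data subjects)")
    if profile.generates_product_data:
        regimes.append("Data Act")
    if profile.has_digital_elements:
        regimes.append("Cyber Resilience Act")
    if profile.operator_is_nis2_entity:
        regimes.append("NIS2")
    if profile.operator_is_vlop_or_vlose:
        regimes.append("DSA (systemic risk assessments)")
    if profile.operator_is_financial_entity:
        regimes.append("DORA")
    return regimes
```

Run over a full system inventory, even such a crude screen yields a first overlap map that a cross-functional compliance team (point 2) can then validate instrument by instrument.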

Conclusions: towards a coherent and competitive regulatory ecosystem

The European Parliament study is an essential contribution to the debate on AI regulation in Europe, offering the first systematic analysis of the interactions between the AI Act and the broader EU digital legislative framework.

The main conclusions are clear:

  1. Regulatory complexity is objective: overlaps between the AI Act, GDPR, Data Act, DSA, DMA, CRA, and other regulations create a significant cumulative burden

  2. Governance fragmentation is a risk: the multiplicity of competent authorities urgently requires strengthened coordination mechanisms

  3. European competitiveness is at stake: the balance between protecting rights and facilitating innovation must be constantly recalibrated

  4. An evolutionary approach is needed: short-, medium-, and long-term recommendations outline a pragmatic path for optimizing the regulatory framework

The central message of the study is that, while the *European ambition for trustworthy and human-centric AI* is legitimate and necessary, its realization requires a regulatory framework that is not only protective but also coherent, proportionate, and competitiveness-oriented.

The AI Act undoubtedly marks a milestone in global technology governance. Still, its success will depend on the EU’s ability to effectively manage its interactions with the existing digital legislative ecosystem and to adapt dynamically to technological evolution.

For legal professionals and organizations, the study offers an indispensable compass for navigating the growing complexity of European digital law. For policymakers, it represents a call to action to ensure that Europe can compete effectively in the age of artificial intelligence without compromising its fundamental values.


Note: This article is based on the study published by the European Parliament in October 2025. The opinions expressed in the study are the sole responsibility of the authors and do not necessarily represent the official position of the European Parliament.


#AIAct #EuropeanParliament #GDPR #DataAct #DSA #DMA #CyberResilienceAct #NIS2 #AIOffice #DigitalLegislation #ArtificialIntelligence #AIRegulation #RegulatoryComplexity #AICompliance #ITRE #GPAI #FRIA #DPIA #AIStandardization #CENCENELEC #EuropeanCompetitiveness #DigitalEurope #TechRegulation #DataProtection #AIPrivacy #DigitalSingleMarket #EULaw #TrustworthyAI