Digital Omnibus on AI: what has changed in the European Parliament’s Draft Report compared to the Commission’s proposal
This article continues the series dedicated to the Digital Omnibus Package launched on this blog in September 2025. In the first article, we examined the context and motivations behind the Commission’s initiative during the consultation phase. With the publication of the formal proposal on November 19, 2025, we analyzed in detail the changes to the AI Act contained in COM(2025) 836 and, in a separate article, the changes to the GDPR and cookie rules.
The dossier is now entering a new phase.
On February 5, 2026, the joint IMCO and LIBE committees of the European Parliament adopted the Draft Report (PE782.530v01-00) on proposal COM(2025) 836 — the Digital Omnibus on AI — signed by co-rapporteurs Arba Kokalari (EPP) and Michael McNamara (Renew). The text contains 24 amendments to the Commission’s proposal and represents the Parliament’s first official position on this dossier.
We analyzed the Draft Report by comparing it directly with the Commission’s proposal. In this article, we illustrate the changes introduced by the Parliament, grouping them by subject area, and the reasons behind them in light of the co-rapporteurs’ Explanatory Statement.
The underlying philosophy: legal certainty instead of discretionary flexibility
The Explanatory Statement accompanying the Draft Report clarifies the co-rapporteurs’ position unequivocally. Kokalari and McNamara share the Commission’s goal of simplifying the implementation of the AI Act, but disagree on the method. The Commission had built the postponement of deadlines for high-risk systems around a flexible mechanism, centered on its own discretionary decision. Parliament replaces that mechanism with fixed dates. The statement makes this clear: the report suggests “to replace the Commission’s proposal of linking the date of application to a decision by the Commission with a set timeline of 2 December 2027 for Annex III systems and 2 August 2028 for Annex I systems”.
This choice reflects a precise legal principle: those who must comply with a rule need to know with certainty when that rule will apply, not wait for a decision that could come at any time.
Deadlines for high-risk systems: the key amendment
The most significant change in the entire Draft Report concerns recital 22 and Article 113(3) of the AI Act (Amendments 6 and 24).
The Commission had proposed a two-tier system. First, the entry into force of the obligations of Chapter III (Sections 1, 2, and 3) would depend on the Commission’s adoption of a decision confirming the availability of measures to support compliance—harmonized standards, common specifications, guidelines. Following that decision, the obligations would enter into force with a transitional period of 6 months (for Annex III systems) or 12 months (for Annex I systems). In the absence of such a decision, the deadlines would still be December 2, 2027, and August 2, 2028.
Parliament removes the entire mechanism of the Commission decision. Amendment 24 rewrites the point so that the deadlines are set directly in the legislative text: December 2, 2027, for systems classified as high risk under Article 6(2) and Annex III; August 2, 2028, for those under Article 6(1) and Annex I: no interim decision, no discretion.
To complement this choice, the new recital 22a (Amendment 7) introduces a positive commitment: the Commission is required to ensure that compliance support measures — standards, common specifications, guidelines — are prepared in good time to ensure timely and effective implementation. In other words, Parliament sets the deadline and asks the Commission to comply, not the other way around.
Amendment 6 to recital 22 adjusts the explanatory text accordingly, removing every reference to the decision-making mechanism and replacing it with the fixed dates.
Deadlines compared: Commission vs. Parliament
The table below highlights the deadlines for high-risk systems, comparing the mechanism proposed by the Commission (COM(2025) 836) with the Parliament’s position in Draft Report PE782.530. The original deadline in the current AI Act for all Chapter III obligations is August 2, 2026.
| Deadline | Commission proposal — COM(2025) 836 | Parliament Draft Report — PE782.530 |
|---|---|---|
| High-risk systems — Annex III (Art. 6(2), Chapter III Sections 1-3) | 6 months after the Commission’s decision confirming the availability of compliance support measures; in any case, no later than December 2, 2027 | December 2, 2027 — fixed date, without any interim decision by the Commission (Am. 6, 24) |
| High-risk systems — Annex I (Art. 6(1), Chapter III Sections 1-3) | 12 months after the Commission’s decision; in any case, no later than August 2, 2028 | August 2, 2028 — fixed date, without any interim decision by the Commission (Am. 6, 24) |
| High-risk classification (Art. 6(5)) | Subject to the same conditional postponement mechanism | Excluded from the postponement: the original deadline of August 2, 2026 continues to apply (Am. 24: “except Article 6(5)”) |
| Compliance support measures (harmonized standards, common specifications, guidelines) | Necessary prerequisite for the Commission’s decision activating the deadlines | The Commission must prepare them in good time to ensure timely implementation — no longer a condition for activation but a standalone obligation (new rec. 22a, Am. 7) |
The substantial difference is not in the final dates — which remain the same — but in the mechanism: the Commission envisaged a rolling deadline linked to its own discretionary decision, with 2027 and 2028 as the only external limits. Parliament directly sets those dates as definite and binding deadlines and transforms the preparation of support measures from a condition for activation to an obligation on the Commission.
AI Literacy: the obligation remains with suppliers and deployers
In our analysis of the Commission’s proposal, we described the shift from a direct obligation on suppliers and deployers to an institutional responsibility as a “radical change of approach.” Parliament partially corrects that course.
Parliament intervenes on this point with two separate amendments (Amendments 8 and 9) that substantially modify the structure of Article 4.
Amendment 8 reintroduces the obligation on suppliers and deployers, with a significant terminological change: the verb changes from “ensure” (as in the current text of the AI Act) to “promote”. That is no longer an obligation to achieve a result but an obligation of conduct. Suppliers and deployers must take measures to promote AI literacy among their staff, considering technical knowledge, experience, the context of use, and the people concerned.
Amendment 9 adds a new paragraph 1a that assigns a supporting role to the Commission and Member States: they must promote AI literacy in society and support suppliers and deployers in fulfilling their obligations.
The rewriting of recital 5 (Amendment 3) consistently adapts the reasoning: the passage in which the Commission described the AI literacy obligation as an excessive compliance burden for smaller businesses has been removed. The new text recognizes that different activities require different skills and that, to make the AI literacy requirement effective, the obligation on operators must be combined with institutional support.
That is a different balance from both the current text of the AI Act and the Commission’s proposal: the obligation on private operators is not eliminated, but its content is softened, and an institutional support role accompanies it.
Special categories of personal data for bias detection: enhanced safeguards
In our previous article on COM(2025) 836, we pointed out, among the open issues, the risk that the generic wording of the “appropriate safeguards” in the new Article 4a could lead to divergent interpretations and possible abuses. Parliament addresses this very point with three calibrated amendments (Amendments 4, 10, and 11) that strengthen the safeguards.
Amendment 10 adds an adverb that may seem marginal but has significant legal weight: processing is permitted “to the extent strictly necessary” rather than simply “to the extent necessary”. The inclusion of “strictly” raises the standard of proportionality, bringing it closer to the EU Court of Justice’s criterion for the processing of sensitive data.
Amendment 11 adds the word “exceptionally” to paragraph 2 of Article 4a, which extends the possibility of processing to suppliers and deployers of AI systems other than high-risk ones. The Commission already provided for the conditions of necessity and proportionality, but the Parliament reinforces the exceptional nature of this extension.
Amendment 4 to recital 6 adds a significant passage: the provisions of Article 4a must apply “from the entry into application of this Regulation” to enable suppliers of high-risk systems to legitimately carry out bias detection, monitoring, and mitigation activities in preparation for compliance with the requirements of Article 10(2)(f) and (g). That is a transitional provision that avoids a regulatory vacuum. Since the deadlines for high-risk systems have been postponed, without this provision, providers would have no legal basis to process sensitive data during the preparation period.
AI Office supervision: more precisely defined powers
The Commission proposed to give the AI Office exclusive competence for the supervision of certain AI systems, in particular those based on GPAI models where the model and system provider are the same, and those integrated into very large online platforms (VLOPs) and very large search engines (VLOSEs).
Amendment 22 amends Article 75(1) by adding an explicit exclusion: AI systems put into service or used by Union institutions, bodies, offices, and agencies subject to the supervision of the European Data Protection Supervisor (EDPS) pursuant to Article 74(9) of the AI Act remain outside the competence of the AI Office. This delimitation is consistent with the EU data protection institutional framework: the EDPS retains its supervisory competence over European institutions without interfering with the AI Office’s new prerogatives.
At the same time, Amendment 5 (new recital 18a) introduces a principle that is missing from the Commission’s proposal: the AI Office must have adequate human, financial, and technical resources to effectively carry out its tasks, particularly in light of the new powers conferred by the Omnibus. In our analysis of COM(2025) 836, we identified the AI Office’s operational capacity as one of the critical issues still to be resolved. Parliament addresses this with a clear political signal: it accepts centralization but demands that effective resources accompany it.
Regulatory sandboxes: more guarantees on data protection
Parliament introduces three changes to the provisions on regulatory sandboxes, shifting the emphasis from mere simplification to the protection of rights.
Amendment 17 extends priority access to the AI Office’s sandbox, which the Commission reserved for SMEs only, to small mid-cap companies and startups as well, broadening the pool of beneficiaries.
Amendment 18 is perhaps the most incisive: it requires the AI Office to ensure that, where innovative systems tested in the sandbox involve the processing of personal data, the competent data protection authorities are “associated with the operation” of the sandbox and “involved in the supervision” of the aspects within their competence, in accordance with the GDPR, Regulation 2018/1725, and Directive 2016/680. The Commission had not envisaged anything similar at the EU level for the sandbox.
Amendment 20 extends this logic to the governance rules of sandboxes: the Commission’s implementing acts on operating procedures will also have to regulate the involvement and supervision of data protection authorities.
Conformity assessment bodies: cooperation and consistency
Amendments 12, 13, and 14 amend Article 28 of the AI Act concerning notified bodies.
Amendment 13 adds an obligation of cooperation between the notification authorities designated under the AI Act and those designated under the sectoral harmonization legislation in Annex I, “in particular to avoid an inconsistent or divergent interpretation of Union law.” The Commission had provided for a single assessment and designation procedure but had not explicitly stated an obligation of substantive cooperation.
Amendment 14 introduces a new paragraph 8a that resolves a practical issue: the notification authority already designated under sectoral legislation is also the competent authority for the single procedure, unless the Member State designates otherwise. That is a default rule that avoids uncertainty during the transition phase.
Cybersecurity: presumption of compliance with the Cyber Resilience Act
Amendment 15 introduces a provision that is not reflected in the Commission’s proposal: a new paragraph 2a in Article 42 of the AI Act establishing a presumption of compliance with the cybersecurity requirements of Article 15 for AI systems that comply with the essential requirements of Regulation (EU) 2024/2847 (Cyber Resilience Act — CRA), to the extent that those requirements are covered by the EU declaration of conformity issued under the CRA.
This provision eliminates the duplication of cybersecurity requirements between the AI Act and the CRA — one of the critical issues most frequently raised by industry operators.
Uniform application and consistency of enforcement
The first two amendments to the Draft Report (Amendments 1 and 2) concern recital 3 and introduce general principles that guide the entire text.
Amendment 1 adds two adjectives to the description of the objective of the amendments: the changes must ensure not only effective but also “simple and uniform” application of the relevant rules. The reference to uniformity is a message to Member States and the Commission: simplification must not result in fragmentation of application.
Amendment 2 introduces a new recital 3a committing the Commission, the AI Office, and the competent authorities of the Member States to avoid “overlaps, inconsistent interpretations, or divergent enforcement” between sectoral legislation, national legislation, and the AI Act, to enable innovation in AI in both the private and public sectors. That is a direct response to one of the most widespread concerns among operators: the risk of uncoordinated enforcement between different authorities.
Codes of practice on transparency: competence of the Commission
Amendment 16, seemingly a technical one, corrects a point in Article 50(7): the competence to promote and facilitate the drafting of EU-level codes of practice for the labeling and detection of artificially generated content is transferred from the AI Office to the Commission. The Commission also retains the power to adopt implementing acts if it considers a code to be inadequate.
Scientific panel: retention of reimbursement power
Amendment 21 is a direct rejection: the Commission proposed deleting paragraph 3 of Article 69, which gives the Commission the power to adopt an implementing act on the reimbursement of experts from the scientific panel consulted by Member States. The Parliament retains this provision, considering it clearly necessary to ensure the panel’s functioning.
Parties consulted by the rapporteurs
The annex to the Draft Report reveals the discussions that informed the rapporteurs’ position: AI Sweden, Allied for Startups, Applia, Mistral, and Coimisiún na Meán (the Irish media regulatory authority). The presence of both industry stakeholders and a national regulatory authority reflects the dual sensitivity of the text: attention to feasibility for businesses and attention to consistency of enforcement.
An overall assessment
Draft Report PE782.530 is not a radical rewriting of the Commission’s proposal. Of the 33 paragraphs of amendments contained in Article 1 of COM(2025) 836, Parliament amends only a targeted subset, leaving the underlying structure unchanged: the extension of the simplified regime to SMEs, the changes to conformity assessment, and the expansion of sandboxes all remain. But where it does intervene, it does so with surgical precision and a consistent direction.
The Parliament’s amendments respond to three principles: regulatory predictability (fixed, unconditional deadlines), the protection of fundamental rights (strict necessity standards for sensitive data, involvement of data protection authorities in sandboxes, exclusion of the EDPS from the competence of the AI Office), and consistency of the legal system (presumption of compliance with the CRA, cooperation between notifying authorities, obligation of uniformity in enforcement).
The text of the co-rapporteurs will now have to face the amendments of the political groups. The opinion of the JURI committee (rapporteur Lagodinsky), which proposes adding an explicit ban on non-consensual nudification to the practices prohibited by Article 5 of the AI Act, is scheduled for a vote on February 24. The coming weeks will tell whether Parliament maintains this balance between simplification and protection, or whether political pressure for rapid adoption — driven by the original August 2, 2026 deadline — pushes toward more significant compromises.
Previous articles in the Digital Omnibus series
- The EU Digital Omnibus: Europe’s Bold Move to Simplify Digital Regulation — Overview of the package and analysis of the call for evidence (September 2025)
- Digital Omnibus on AI: European Commission Proposes Simplifications to the AI Act — Analysis of proposal COM(2025) 836 (November 2025)
- Digital Omnibus: Cookies, GDPR and AI Training - New European Privacy Rules — Changes to the GDPR and cookie rules (November 2025)
Reference documents
- Draft Report IMCO-LIBE, PE782.530v01-00, 5.2.2026: CJ40-PR-782530_EN.pdf
- Commission proposal, COM(2025) 836 final, 19.11.2025: COM_COM(2025)0836_EN.pdf
- Legislative procedure: 2025/0359(COD)
- Draft Opinion JURI, PE784.179, rapporteur Lagodinsky: JURI-PA-784179_EN.pdf
- EDPB-EDPS Joint Opinion 1/2026: link
Related Hashtags
#AIAct #DigitalOmnibus #EuropeanParliament #ArtificialIntelligence #EURegulation #AICompliance #GDPR #DataProtection #BiasDetection #CyberResilienceAct #AIOffice #RegulatorySimplification #HighRiskAI #AILiteracy #PrivacyByDesign #LegalTech #AIGovernance #IMCO #LIBE #TechRegulation
