Introduction: Beyond the AI Act
In the first article of this series, we analyzed how the Digital Omnibus simplifies the AI Act with more flexible timelines and centralized governance. But that was only part of the story. On November 19, 2025, the European Commission presented a much more ambitious package that touches the very heart of European privacy: the GDPR and cookie rules.
While the AI Act modifications were generally welcomed as reasonable pragmatic adjustments, the proposals on privacy and personal data have triggered a storm. Digital rights organizations speak of “GDPR dismantling.” The Commission responds that this is “necessary evolution.” Who is right?
The stakes are high. This is not just about cookie banners or legal technicalities. It’s about deciding what privacy model we want for Europe’s digital future, at a historical moment when artificial intelligence has an insatiable hunger for data and Big Tech is pushing for deregulation.
The Promise: Freeing Us from “Consent Fatigue”
Let’s start with the problem we all know. Every time we visit a website, the infamous cookie banner appears. We click “Accept All” without reading, just to make it disappear. We repeat this dozens of times a day. This is the “consent fatigue” that the Commission wants to solve.
The current system stems from Article 5(3) of the ePrivacy Directive, which requires explicit consent before installing non-essential cookies on the user’s device. A sacred principle on paper, but one that in practice has created a paradoxical ecosystem: users are bombarded with consent requests but click mechanically without truly understanding what they’re authorizing. Companies spend millions on increasingly sophisticated (and irritating) consent management systems. And privacy? It’s hard to say if it’s really better protected.
The Commission has done the math. Compliance burdens cost European businesses billions. Users are frustrated. The effectiveness of protection is questionable. According to Brussels, the system works for no one: neither for those who must comply with it, nor for those who should be protected by it.
But is the proposed solution the right one? This is where the problems begin.
The New Article 88a: Moving Furniture While the Ceiling Collapses
The most significant structural change is moving cookie regulation from the ePrivacy Directive into the GDPR, through a new Article 88a. At first glance, this seems like a rational move toward regulatory consolidation: instead of two overlapping frameworks, a single one. Greater coherence, uniform application through national GDPR authorities, an end to interpretative fragmentation.
The devil, as always, is in the details. This “Article 88a” would regulate all “processing of personal data on and from terminal equipment”, meaning cookies, fingerprinting, and tracking of every kind. But in moving the regulation, the rules of the game are also being changed. And that is where the controversies begin.
When Cookies No Longer Need Consent
Currently, cookies can be used without consent only for purposes strictly necessary to the site’s functioning. The classic example is the e-commerce cart or session management. Everything else requires the user’s explicit consent.
The Digital Omnibus proposes to widen the exceptions significantly. Security and fraud protection would no longer require consent. Sensible, you might say: who would oppose security measures? But the point is: who defines what falls under “security”? Here the vagueness of the wording opens up troubling scenarios.
Even more controversial is the exception for “audience measurement”, i.e., website traffic statistics. Google Analytics without a consent banner? Technically, according to the proposal, it could become possible if the data remains aggregated and is not shared with third parties. But how do you verify that the data truly stays internal? And what prevents creative definitions of “audience measurement” from legitimizing practices that are prohibited today?
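To make the exception concrete, here is a minimal sketch of what “consent-free” audience measurement might look like under the proposal’s logic: purely aggregated, first-party counts with no per-user identifier ever stored. The structure and names are illustrative assumptions, not anything the text prescribes.

```typescript
// Illustrative sketch: audience measurement that stores no per-user data.
// Assumption: the proposed exception would cover first-party aggregates
// like these, kept internal and never shared with third parties.

type PageViewCounts = Map<string, number>; // path -> total views

const counts: PageViewCounts = new Map();

// Record a page view. Note what is deliberately absent: no cookie,
// no user ID, no IP address, no fingerprint. Only the path and a tally.
function recordPageView(path: string): void {
  counts.set(path, (counts.get(path) ?? 0) + 1);
}

// Periodic export keeps only aggregates; raw events are never persisted.
function exportDailyReport(): Record<string, number> {
  return Object.fromEntries(counts);
}

recordPageView("/pricing");
recordPageView("/pricing");
console.log(exportDailyReport()); // { "/pricing": 2 }
```

The hard question the article raises remains visible even here: nothing in the code itself proves to a regulator that the operator doesn’t also log identifiers elsewhere.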
The third exception concerns “services explicitly requested by the user.” Here too, the logic is apparently reasonable. If I actively request a functionality, it’s normal for the site to use cookies to provide it to me. But in digital practice, almost everything can be presented as a “requested service.” Content personalization? Experience optimization? Where do you draw the line?
Privacy organizations fear that these exceptions, in practical application, will become the rule rather than the exception. A disguised return to the pre-GDPR world, where tracking was pervasive and consent was an illusion.
The Technological Utopia of “Automated Signals”
Faced with criticism about cookies without consent, the Commission presents its trump card: “automated consent signals”. The idea is appealing. Instead of clicking through repetitive banners, I would configure my privacy preferences once, in my browser or a dedicated tool. These preferences would be automatically transmitted to every site I visit, in a standardized machine-readable format. Websites would be obliged to respect these signals. No more banners, a smoother experience, and more informed consent, because it is expressed calmly rather than under pressure.
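A real-world precursor already exists: the Global Privacy Control signal, which some browsers send as the HTTP header Sec-GPC: 1. The sketch below shows how a site might honor such a machine-readable signal server-side; the Express middleware and the handling logic are illustrative assumptions, since the Omnibus does not specify any format.

```typescript
import express from "express";

const app = express();

// Sketch: honoring a machine-readable privacy signal, modeled on the
// existing Global Privacy Control header ("Sec-GPC: 1"). The Omnibus
// does not mandate this format; it is used here only as an analogy.
app.use((req, res, next) => {
  const optedOut = req.header("Sec-GPC") === "1";
  // Expose the decision to downstream handlers. A real site would gate
  // every non-essential cookie and tracking script on this flag.
  res.locals.trackingAllowed = !optedOut;
  next();
});

app.get("/", (_req, res) => {
  const tracker = res.locals.trackingAllowed
    ? "<script src='/analytics.js'></script>" // loaded only without opt-out
    : "";
  res.send(`<html><body>Hello${tracker}</body></html>`);
});

app.listen(3000);
```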
On paper, it’s beautiful. In practice, the questions are numerous and concerning.
Who will develop these technical standards? Technical standardization is notoriously slow and subject to conflicts of interest. It will take years to arrive at shared specifications. In the meantime, what happens? And when the standards arrive, will they really be respected? How do you ensure that websites don’t simply ignore the signals? Enforcement is already difficult today with visible banners; with automatic signals that users never see, violations become invisible.
Then there’s the granularity problem. A binary signal (accept all / refuse all) is too simple and doesn’t reflect the real preferences of users, who might want to accept some cookies and refuse others. But if we create overly detailed categories, we fall back into the very complexity we wanted to eliminate.
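The trade-off is easiest to see by sketching the signal itself. A binary flag is trivially simple but lossy; a categorized structure maps better onto real preferences but multiplies the decisions a user must configure. The category names below are hypothetical, since no standard taxonomy exists yet.

```typescript
// Option 1: binary signal. Simple, but it cannot express
// "analytics yes, advertising no".
type BinarySignal = { trackingAllowed: boolean };

// Option 2: categorized signal. Closer to real preferences, but every
// category is one more decision to configure. These names are invented
// for illustration; no standard taxonomy exists.
interface CategorizedSignal {
  strictlyNecessary: true; // never optional, by definition
  audienceMeasurement: boolean;
  personalization: boolean;
  advertising: boolean;
  thirdPartySharing: boolean;
}

// Either way, the enforcement question is identical: the site must map
// every cookie it sets to a category and honor the corresponding flag.
function maySetCookie(
  signal: CategorizedSignal,
  category: keyof CategorizedSignal
): boolean {
  return signal[category] === true;
}
```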
And adoption? How many users will actively configure these preferences? The history of technology teaches that advanced privacy settings are used only by an aware minority; the majority stick with the defaults. If the default is “accept,” we will have created a presumed-consent system even more pervasive than the current one.
AI Training: Does Legitimate Interest Become a Blank Check?
But the most explosive modification, the one that triggered red alerts in digital rights organizations, concerns artificial intelligence. The Digital Omnibus establishes that the development and operation of AI systems constitute a “legitimate interest” of the controller under Article 6(1)(f) of the GDPR.
To understand the scope of this modification, we need to take a step back. The GDPR provides various legal bases for processing personal data. Explicit consent is one of them, but not the only one. There’s also “legitimate interest,” which allows processing if the controller’s interest outweighs the data subject’s rights. Currently, this balancing must be done case by case and documented, and the burden of proof lies with the controller.
The Commission’s proposal reverses this logic. Legitimate interest for AI training is presumed. There’s no longer a need to demonstrate it each time. The burden shifts: now it’s the data subject who must demonstrate that their rights prevail over the controller’s interest in using the data to train artificial intelligence models.
The implications are enormous. Virtually any use of personal data could be justified as “AI training” or “AI operation.” Large-scale web scraping to build datasets? Legitimate interest. Using private conversations to train chatbots? Legitimate interest. Behavioral profiling to improve recommendation algorithms? Legitimate interest.
Max Schrems, founder of noyb, minces no words: “Digital Omnibus: EU Commission wants to wreck core GDPR principles”. European Digital Rights (EDRi) adds: “Digital Omnibus – Deregulation instead of simplification”.
The Commission responds that all safeguards remain: the obligation of impact assessment (DPIA), data minimization principles, appropriate technical measures, and especially data subjects’ rights (access, objection, deletion) continue to apply. But critics retort that in practice, exercising these rights becomes much more difficult when processing is based on presumed legitimate interest rather than explicit consent.
There’s also an aspect of coordination with the AI Act, analyzed in the first article. Article 4a of the AI Act allows processing of sensitive data for bias mitigation. Now the Digital Omnibus adds legitimate interest for AI training on ordinary, non-sensitive data. Together, these two norms create a very permissive framework for data use in artificial intelligence. Too permissive, according to those who fear that Europe is yielding to Big Tech lobbying pressures.
Pseudonymization: When Personal Data Is No Longer Personal
Another controversial modification concerns pseudonymization and the very definition of “personal data”. The Digital Omnibus codifies a recent European Court of Justice ruling and pushes it to its extreme consequences.
The proposal is this: pseudonymized datasets can be shared with third parties without this constituting a transfer of personal data, if the third party recipient does not have the means to re-identify individuals. In practice, a pseudonymous ID like “XYZ123” would not be considered personal data for those receiving the dataset who don’t have the mapping table that connects XYZ123 to John Doe.
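A minimal sketch of the mechanics helps. Assume a keyed-hash scheme (an illustrative choice, not something the proposal mandates): the sender keeps a secret key, so only the sender can link the opaque IDs back to individuals, while the recipient sees stable but meaningless strings.

```typescript
import { createHmac } from "node:crypto";

// Illustrative pseudonymization via HMAC-SHA256. The secret key is held
// only by the sender; the recipient, lacking the key and any mapping
// table, cannot reverse the IDs. Key handling here is deliberately naive.
const SECRET_KEY = "held-only-by-the-sender"; // hypothetical key management

function pseudonymize(userId: string): string {
  return createHmac("sha256", SECRET_KEY).update(userId).digest("hex");
}

// The sender can re-derive the ID at will, and thus re-identify:
console.log(pseudonymize("john.doe@example.com"));
// The recipient receives only records like { id: "9f2c...", purchases: 3 }.
// Under the proposed "subjective" test, such a record might not count as
// personal data for the recipient at all, which is precisely the controversy.
```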
This introduces a “subjective approach” to the definition of personal data. Currently, the GDPR defines personal data objectively: it’s personal data if it relates to an identified or identifiable person. With the new formulation, data becomes “personal” only for those who have a reasonable possibility of identifying the individual.
The potential impact is devastating for data protection. Sectors that systematically operate with pseudonyms or random IDs, such as data brokers and the programmatic advertising industry, could find themselves outside the scope of the GDPR. If the data is not “personal” for them, they have no GDPR obligations. No consent, no information notice, no data subject rights.
Re-identification techniques, moreover, evolve rapidly. Datasets that today seem impossible to de-anonymize could tomorrow be reconnected to real people by combining them with other sources. Who protects data subjects in this scenario? If the dataset recipient is not covered by the GDPR, we’re in a regulatory limbo.
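The classic mechanism behind this fear is the linkage attack: joining a “de-identified” dataset with an auxiliary public one on quasi-identifiers. The toy sketch below uses entirely invented records.

```typescript
// Toy linkage attack. Neither dataset names the person directly, but a
// join on quasi-identifiers (zip code + birth year) re-identifies them.
// All records are invented for illustration.

interface PseudonymizedRow {
  id: string;
  zip: string;
  birthYear: number;
  diagnosis: string;
}
interface PublicRow {
  name: string;
  zip: string;
  birthYear: number;
}

const shared: PseudonymizedRow[] = [
  { id: "XYZ123", zip: "00184", birthYear: 1971, diagnosis: "hypertension" },
];

const voterRoll: PublicRow[] = [
  { name: "John Doe", zip: "00184", birthYear: 1971 },
];

for (const row of shared) {
  const match = voterRoll.find(
    (p) => p.zip === row.zip && p.birthYear === row.birthYear
  );
  if (match) {
    console.log(`${row.id} is likely ${match.name}: ${row.diagnosis}`);
  }
}
// If only one person in that zip code was born in 1971, "XYZ123" is no
// longer anonymous, even though no single dataset contained the link.
```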
noyb intervened with an “Open letter: Digital omnibus brings deregulation, not simplification”. Its core argument: it is not any single change that dismantles the GDPR, but the accumulation of apparently technical modifications that, taken together, hollow out protection from within.
Coordination with the AI Act: Two Systems, One Confusion
The GDPR modifications of the Digital Omnibus must be read together with the AI Act modifications analyzed in the first article. A two-tier system is created that, on paper, should be complementary but risks generating overlaps and conflicts.
For AI training on ordinary, non-sensitive data, use the presumed legitimate interest under the GDPR. For AI training on sensitive data aimed at bias mitigation, use Article 4a of the AI Act. For pseudonymized datasets, apply the new sharing rules. For high-risk AI systems, the requirements of the AI Act and the GDPR stack on top of each other.
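Read as a decision procedure, the routing sketched above looks roughly like this. It is a schematic simplification of the article’s own summary, not a compliance tool; the categories and return strings are invented labels.

```typescript
// Schematic sketch of the two-tier routing described above. A deliberate
// simplification for illustration, not legal advice.
type Processing =
  | { kind: "ai-training"; sensitiveData: boolean; purpose?: "bias-mitigation" }
  | { kind: "pseudonymized-sharing" }
  | { kind: "high-risk-ai-system" };

function applicableRegime(p: Processing): string {
  switch (p.kind) {
    case "ai-training":
      if (p.sensitiveData) {
        return p.purpose === "bias-mitigation"
          ? "AI Act Article 4a (sensitive data for bias mitigation)"
          : "Neither carve-out applies: ordinary GDPR Article 9 regime";
      }
      return "GDPR Article 6(1)(f): presumed legitimate interest";
    case "pseudonymized-sharing":
      return "New pseudonymized-dataset sharing rules";
    case "high-risk-ai-system":
      return "Cumulative AI Act + GDPR requirements";
  }
}

console.log(applicableRegime({ kind: "ai-training", sensitiveData: false }));
```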
Two governance authorities: national GDPR authorities and the European AI Office. Two impact assessment systems: the GDPR’s DPIA and the AI Act’s assessments. Two sets of guidelines, two enforcement mechanisms, two sets of possible sanctions.
The Commission has promised joint guidelines between the European Data Protection Board and the AI Office. But the details of this coordination remain vague. What happens if a national GDPR authority considers that processing for AI training is not justified as legitimate interest, while the AI Office considers it compliant with the AI Act? Who prevails? How do you avoid forum shopping, with companies seeking the most favorable interpretation?
The Reactions: A Polarized Debate
The Digital Omnibus has split the European digital world into two opposing camps.
On one side, the Commission and the businesses that see these modifications as a necessary evolution.
Phil Brunkard of Info-Tech Research Group represents the position of many in the enterprise IT sector: “this is mostly positive news, as simplified GDPR rules, a unified cybersecurity reporting portal, and the European Business Wallet all point to less administrative friction and faster compliance cycles. Even if just part of the projected €5 billion (about $5.8 billion) in savings by 2029 materializes, it will free up budgets for innovation instead of paperwork”.
On the other side, a compact front of digital rights organizations that see the Digital Omnibus as a betrayal of the GDPR’s founding principles. European Digital Rights wrote an open letter to Commissioner Virkkunen signed by three major NGOs: “However, the legislative changes now contemplated go far beyond mere simplification. They would deregulate core elements of the GDPR, the e-Privacy framework and AI Act, significantly reducing established protections”.
The concerns are not only about the substance of the modifications, but also about the method. The legislative process was accelerated. Public consultations spoke generically about “cookie banners,” but the final proposal touches fundamental pillars of the GDPR. Thorough impact assessments on fundamental rights are lacking. Privacy organizations denounce Big Tech lobbying pressure behind the scenes.
In the middle, figures like Sanchit Vir Gogia of Greyhound Research offer a more nuanced but no less critical reading, stating that the reform: “may appear to reduce complexity for enterprise IT, but in truth, it simply reassigns that complexity inward”.
What It Means in Practice
For publishers and web platforms, the modifications open up unprecedented scenarios. If the exceptions for audience measurement pass in their current form, sites could go back to using analytics without consent banners. But it’s a risky bet. Interpretations of the new rules will diverge in the early years, and enforcement during the transitional period is an unknown. Investing heavily in “consent-free” systems could prove premature if national authorities end up interpreting the exceptions restrictively.
AI developers, on the other hand, see vast opportunities opening up. The presumed legitimate interest for AI training greatly simplifies access to data. But beware: impact assessments remain mandatory, as does documentation of the legitimate interest (even if presumed, it must still be justified), and transparency toward data subjects. Most importantly, the rights of objection, access, and deletion remain fully applicable. A user can always object to the processing of their data for AI training, and at that point the company must demonstrate that its interest really does prevail. It’s not the blank check some fear, nor the free pass others hope for.
For Data Protection Officers, complicated years lie ahead. Navigating the new framework will require a thorough revision of internal procedures, extensive team training, and above all the ability to interpret rules that will inevitably be ambiguous at first. Coordination between the AI Act and the modified GDPR will add further complexity.
And citizens? The user experience could genuinely improve if automated signals work. Configuring preferences once and never seeing banners again would be real progress. But the price could be less transparency about who uses our data and for what. Formal rights remain, but exercising them becomes harder when there is no longer a moment of explicit consent at which we are told what we are authorizing.
The Questions That Remain Open
The constitutionality of the modifications with respect to Article 8 of the EU Charter of Fundamental Rights is far from certain. The European Court of Justice has repeatedly shown, with precedents like the Data Retention Directive and Privacy Shield, that it does not hesitate to strike down norms it considers harmful to fundamental rights. The subjective redefinition of “personal data” and the presumed legitimate interest for AI training could be challenged after adoption. Years of litigation lie ahead.
Technical standards for automated signals will require time, perhaps years. Who will develop them? W3C, IETF, European standardization bodies? With what governance? And especially, with what enforcement? The history of voluntary web standards teaches that adoption is slow and partial. Will browsers natively implement these functionalities? Will sites really respect them? How will it be verified?
The transitional period will be chaotic. The estimated timeline speaks of final adoption by mid-2026, entry into force in summer 2026, but full application probably only from 2028 when technical standards are ready. Two years of legal uncertainty in which it won’t be clear which regime applies. Do existing contracts remain valid? Are cookie banners still mandatory? Can companies already invoke legitimate interest for AI training?
And then there’s the specific sectoral impact. Adtech and programmatic advertising could be revolutionized by audience measurement without consent, but also heavily hit if interpretations are restrictive. Healthcare will have to coordinate the new rules with already complex sectoral regulations. Academic research could benefit from facilitations on pseudonymized datasets but will have to deal with ethical norms governing research on human subjects.
Scenarios for the Future
Let’s try to imagine three possible scenarios for the coming years.
Optimistic scenario: Automated signals work magnificently. Robust and interoperable technical standards are developed rapidly. Browsers integrate them natively, users actively configure preferences, sites respect them. Enforcement is effective, GDPR authorities prevent abuse of legitimate interest for AI training, joint guidelines with the AI Office clarify every ambiguity. A new balance is reached between innovation and rights protection. Europe maintains global leadership in digital rights but with a more workable system. Win-win.
Pessimistic scenario: Automated signals fail due to technical complexity and low adoption. Exceptions on cookies become the norm, effectively returning to pre-GDPR. AI training as legitimate interest legitimizes wild web scraping and indiscriminate data use. The subjective redefinition of personal data empties the GDPR of its meaning. Enforcement is weak because authorities are overwhelmed and priorities lie elsewhere. Europe loses its leadership in privacy and digital rights. Lose-lose.
Realistic scenario: A confused mix of partial successes and problems. Automated signals are adopted but only by a conscious minority. Abuse emerges on legitimate interest for AI training, litigation follows, gradually case law clarifies the limits. The 2026-2028 period is chaotic and full of uncertainties. Then, slowly, the system stabilizes on a new balance that is different both from the original GDPR and from the intentions of the initial proposal. Compromise.
Which scenario materializes depends on the political will for enforcement. If GDPR authorities and the AI Office rigorously apply the provided safeguards, the system can work. If a lax approach prevails, we slide toward de facto deregulation. The GDPR is what the authorities that apply it make of it, not just what’s written in legislative texts.
A Comparative Look
It’s worth looking at what happens elsewhere. In the United States, a fragmented sectoral approach prevails, with different state laws (California’s CPRA, Virginia’s CDPA) but no comprehensive federal regulation. The regime is generally more permissive on cookies and tracking, with opt-out prevailing over opt-in. If the Digital Omnibus passes, Europe moves partway toward the American model while maintaining a more protective framework.
China has the Personal Information Protection Law (PIPL), inspired by the GDPR but with centralized and heavily politicized enforcement. AI is heavily regulated but from a state control perspective. The Digital Omnibus certainly doesn’t bring Europe closer to the Chinese model.
More interesting is the comparison with the post-Brexit United Kingdom. The UK GDPR is incrementally diverging from the European GDPR, with a focus on preserving “data adequacy” without strangling business. The pragmatic British approach is surprisingly close to the spirit of the Digital Omnibus. The two regimes could reconverge in practice even while remaining formally separate.
A Historic Crossroads
We’re at a crossroads. The Digital Omnibus represents much more than a collection of technical modifications. It’s a fundamental choice about the type of digital society we want to build.
The question is not whether the GDPR should evolve. Certainly it must. No regulation is perfect, especially in a rapidly changing field like digital. The question is: in which direction? Toward a model that maintains the centrality of fundamental rights by pragmatically adapting protection methods? Or toward a model that, behind the rhetoric of simplification, substantially lowers the level of protection?
The Commission claims it’s the first option. Indeed, it states “This initiative opens opportunities for European companies to grow and to stay at the forefront of technology while at the same time promoting Europe’s highest standards of fundamental rights, data protection, safety and fairness”.
The truth is probably somewhere in between, but the risk of drift is real.
Three final considerations:
First: the text that is eventually adopted will differ from the current proposal. The European Parliament and the Council will modify the most controversial parts. Civil society pressure and public debate matter. Nothing is set in stone yet.
Second: even if it passed in its current form, much would depend on implementation. Guidelines, technical standards, and above all enforcement by the authorities will determine the real impact. A permissive text applied rigorously is different from a permissive text applied weakly.
Third: this affair shows how fragile rights protection is in the digital age. What was won through years of battles can be eroded in a few months if public attention wanes. Defending privacy is a continuous process, not a result acquired once and for all.
The Digital Omnibus, together with the AI Act modifications analyzed in the first article, redesigns the European regulatory landscape for the artificial intelligence era. Will it be a positive evolution or a worrying drift? The answer depends on us: on how vigilant we remain, on how actively we participate in the debate, on how much we make our voice heard to political representatives.
The coming months will be decisive. The debate is open.
Digital Omnibus Package Series:
First article: AI Act - Simplifications and New Rules
Second article: Cookies, GDPR and Privacy (this article)
References
- Digital Package - European Commission
- IP/25/2718 - Commission Press Release
- noyb: Digital Omnibus wants to wreck core GDPR principles
- EDRi Statement on Digital Omnibus
- Gibson Dunn: Digital Omnibus - First Look
Disclaimer: This analysis is for informational purposes only and does not constitute legal advice. For specific assessments of your organization’s compliance, it is advisable to consult specialized professionals.
Related Hashtags
#DigitalOmnibus #GDPR #Privacy #Cookie #ePrivacy #AITraining #DataProtection #LegitimateInterest #ConsentFatigue #EURegulation #DigitalRights #FundamentalRights
