On 9 April 2026, exactly one year after presenting the AI Continent Action Plan, the European Commission published a progress update on the milestones achieved. The institutional message is unambiguous: progress has been made across all five pillars — infrastructure, data, talent, adoption, and trustworthy AI. Among the milestones claimed are 19 operational AI Factories deployed across European supercomputers, 13 regional AI Factory antennas, the launch of the Data Union Strategy, the AI Omnibus for regulatory simplification, the AI Skills Academy, the Apply AI Strategy for strategic sectors, and the announcement of the European AI Innovation Month, scheduled from 14 October to 17 November 2026.
The quantitative record is undeniable. But the fundamental question is not how many initiatives have been launched — it is whether Europe is building an AI continent capable of holding together computational power, data access, and effective safeguards for individuals. And on that point, the assessment deserves a closer look.
Infrastructure and Data: Real Progress, Autonomy Still Pending
The expansion of AI Factories and the prospect of AI Gigafactories — backed by €20 billion through InvestAI — represent a significant infrastructural leap. The stated goal is to provide European researchers, startups, and SMEs with the computational capacity needed to train and develop globally competitive AI models.
At the same time, the Data Union Strategy aims to unlock the potential of data access and sharing on a continental scale — an essential condition for building high-quality models.
However, computational capacity and data access are necessary but not sufficient conditions for genuine European strategic autonomy. Dependence on non-EU supply chains for chips, cloud infrastructure, and foundational platforms remains a structural challenge. Building European AI factories is essential; ensuring they operate on technological foundations that Europe can effectively control is equally so.
The Real Benchmark: Trustworthy AI
In the 9 April communication, the Commission reaffirms its commitment to AI that is trustworthy, secure, and aligned with democratic values. The reference is appropriate, but in this first-year assessment it remains largely programmatic and brief, without substantively addressing the relationship between industrial acceleration and operational safeguards.
That is the critical point. The concept of trustworthy AI cannot be reduced to an infrastructural slogan or treated as one item among many in a list of pillars. It must translate into concrete requirements: system governance, accountability across the value chain, transparency in automated decision-making, and — above all — effective human oversight.
On this last point, human oversight should not be read solely as a technical requirement confined to high-risk systems, but as an interpretive criterion and regulatory value that more broadly orients the European model of AI governance. The AI Act expressly provides for it as one of the requirements for high-risk systems. Still, its systemic significance transcends that specific category: it is a principle that informs the entire regulatory framework and expresses its underlying logic.
The risk today is one of imbalance: a robust and visible industrial acceleration, set against operational safeguards that remain largely to be built and made effective in practice.
Regulatory Simplification and the Risk of Weakening Safeguards
Among the Commission’s milestones, the AI Omnibus holds a central position. Its stated objective is to provide businesses with legal certainty and reduce compliance costs, in line with the simplification agenda of the Digital Omnibus Package presented on 19 November 2025.
The position of European data protection authorities warrants close attention. In Joint Opinion 1/2026 of 20 January 2026, the EDPB and EDPS took a clear stance on the Digital Omnibus on AI proposal. The message is nuanced: support for the general objective of simplifying AI Act implementation, but with precise demands for stronger safeguards.
Specifically, the EDPB and EDPS raised four concerns: the postponement of deadlines for high-risk AI systems; the lowering of the strict-necessity standard for processing special categories of personal data for bias detection purposes; the deletion of the registration obligation for Annex III systems self-assessed by providers as non-high-risk; and the proposal to convert the AI literacy obligation from a duty on providers and deployers into a mere public encouragement mechanism.
In the subsequent Joint Opinion 2/2026 of 10 February 2026 — addressing the Digital Omnibus as a whole, including proposed amendments to the GDPR — the authorities reiterated the same position: simplification is legitimate and in some cases necessary, but it must not result in a restrictive redefinition of the safeguards provided by data protection law.
The principle is clear: simplification without deregulation. The complementarity between the AI Act and the GDPR is one of the cornerstones of the European model and must be preserved, not sacrificed to competitiveness.
The Italian Case: An Early National Test
Against this backdrop, Italy already offers early signals of how resilient the European paradigm will prove in practice, both in legal interpretation and in implementation.
Law No. 132 of 23 September 2025 introduced a national framework complementary to the AI Act in the Italian context. Early regulatory and judicial developments suggest that the real test will not be infrastructural innovation alone, but the operational resilience of safeguards — in terms of transparency, human oversight, and fundamental rights protection — in the concrete functioning of AI systems adopted across the public and private sectors.
The Italian experience, while still in its early stages, represents a significant testing ground for assessing whether the European model can withstand the transition from law to practice.
Conclusion: The Credibility of the European Model Is at Stake
One year on, the AI Continent Action Plan shows real progress on infrastructure and organization. Europe is investing in computational capacity, data access, talent development, and AI adoption in strategic sectors. These are matters of fact.
But the political and legal distinctiveness of the European model is not measured by the power of its supercomputers. It is measured by its ability to operationalize the Trustworthy AI pillar: effective governance, real accountability, concrete human oversight, and substantive protection of fundamental rights. On this front, there is still a long way to go.
The positions expressed by the EDPB and EDPS in their Joint Opinions of January and February 2026 make it clear that simplification and rights protection are not competing objectives — they must advance together. The upcoming European AI Innovation Month, from 14 October to 17 November 2026, will be an important occasion to assess whether the infrastructural promises are matched by equally robust safeguards for individuals.
The real question remains open: does Europe want to be an AI continent, or a trustworthy AI continent? The answer is not yet written.
