Just one day before the second phase of the AI Act took effect, the European Commission issued its definitive verdict: the General-Purpose AI Code of Practice is “adequate.” But what does this approval mean for the future of artificial intelligence in Europe?
Exactly 24 hours before the new AI Act rules came into force, the European Commission published what many consider the most important document for the future of AI in Europe: Opinion C(2025) 5361 of August 1, 2025, on the assessment of the General-Purpose AI Code of Practice.
As we anticipated in our July 30 article, August 2, 2025, was set to mark a decisive turning point. Now we know precisely how.
The Commission’s Verdict: Green Light with Reservations
The Commission concludes that “the General-Purpose AI Code of Practice adequately covers the obligations provided for in Articles 53 and 55 of the AI Act and meets the aims according to Article 56.” A full approval, but one that comes with important caveats.
What “Adequate” Means in Practice
The 67-page document meticulously analyzes every aspect of the Code, which is divided into three chapters:
1. Transparency Chapter
- Covers obligations under Article 53(1) points (a) and (b)
- Includes the “Model Documentation Form,” a standardized template for model documentation
- Specifies how to keep model information up-to-date
2. Copyright Chapter
- Implements Article 53(1) point (c) on copyright compliance
- Requires policies to identify content with rights reservations
- Includes complaint mechanisms for rights holders
3. Safety and Security Chapter
- 10 specific commitments for providers of models with systemic risk
- Framework for risk assessment and mitigation
- Reporting requirements to the European AI Office
The Twelve Commitments That Change the Game
The Code establishes 12 total commitments, strategically divided:
- 2 commitments for all general-purpose AI model providers
- 10 additional commitments for providers of models with systemic risk
But the real breakthrough is in the structure: these aren’t isolated rules, but an integrated system where each element feeds into the others.
The Iterative Process Nobody Saw Coming
One of the most innovative aspects of the Code is the “iterative and recursive” approach to systemic risk assessment:
- Risk identification (Commitment 2)
- Risk analysis (Commitment 3)
- Acceptability determination (Commitment 4)
- Mitigation implementation (Commitments 5-6)
This process must be conducted before market release and then continuously throughout the model’s lifecycle.
Three Game-Changing Innovations
1. The Flexibility of “Similarly Safe or Safer Models”
The Code introduces the concept of “similarly safe or safer models” (Appendix 2), allowing for lighter measures for models that demonstrate safety characteristics comparable to those of already-evaluated models.
2. The Role of Independent Evaluators
Appendix 3.5 specifies when independent external evaluations are required, creating a new market for specialized certifiers.
3. Multi-Stakeholder Governance
The Code was developed with over 1,000 participants in a process involving providers, authorities, civil society, academics, and independent experts.
Critical Points the Commission Doesn’t Hide
Despite approval, the Commission highlights areas for improvement:
Lack of Specific KPIs
The document acknowledges that “the Code does not contain key performance indicators” to measure implementation, but the Commission considers such indicators “currently not appropriate” given the sector’s nascent state.
The Training Data Template Question
For Article 53(1) point (d) on training data summaries, the Code defers to the AI Office template, acknowledging “diverging views among stakeholders” on the required level of detail.
The Copyright Question
The Commission is clear: “the assessment only concerns the adequacy of the Code as a means to demonstrate compliance with Article 53(1)(c) and does not affect the application of EU copyright law.”
The Monitoring System Nobody Expected
The real innovation is in the continuous review system:
- Regular monitoring by the AI Office and Board
- Formal updates at least every two years
- Rapid guidance in case of imminent threats
- Cooperation with national authorities and downstream providers
Four Emergency Scenarios
The document identifies situations that could require rapid updates:
- Materialization of systemic risks
- Discovery of new attack vectors
- Significant changes in deployment contexts
- Development of breakthrough capabilities
What It Means for Italian Companies
For Tech Giants
- Immediate compliance: Those who haven’t signed yet must move now
- Investment in governance: Internal frameworks become crucial
- Reporting preparation: Monitoring procedures must be operational
For AI Scale-ups
- Threshold assessment: Many models might fall under the requirements
- Strategic partnerships: Access to independent evaluators becomes competitive
- First-mover advantage: Early adopters gain market credibility
For the Italian Ecosystem
The Code creates unexpected opportunities:
- New market for AI evaluation services
- Growing demand for AI governance expertise
- Competitive advantage for compliance-ready companies
Three Signals We Must Monitor
- Adoption rate in the next 30 days: an indicator of market leadership
- First interpretations by national authorities: these could create divergences
- Reaction of non-EU models: a test of the Brussels Effect in AI
The Timeline That Changes Everyone’s Plans
August 2025: Code operational, collecting signatures
December 2025: First compliance assessments
August 2026: Full enforcement begins
August 2027: Deadline for existing models
Toward Regulated AI: Opportunity or Brake?
The Commission’s approval of the Code marks the end of the AI Act’s experimental phase. Now begins the real test: transforming regulatory principles into competitive advantage.
Companies that view compliance as an innovation accelerator—not an obstacle—will be the leaders of the next decade. The European market already rewards transparency, security, and ethics. The GPAI Code simply crystallizes this evolution.
The future of European AI is no longer being written. It’s being implemented.
This article analyzes European Commission Opinion C(2025) 5361 of August 1, 2025. For complete documentation and regulatory updates, please continue to follow our blog.