AI Act: August 2, 2025 Marks a Decisive Turning Point for Artificial Intelligence in Europe
The countdown is almost over. In three days, on August 2, 2025, the second phase of the European AI Act will come into force, bringing with it a package of regulations that will redefine the artificial intelligence landscape on the Old Continent. If you thought the first provisions that came into effect in February were just an appetizer, get ready for the main course.
As we write these lines, the final pieces of the regulatory puzzle are falling into place with developments that will make this August even more significant than expected.
A Strategic Turning Point
Following the application of the first fundamental regulations last February, which banned the most hazardous AI practices and established general principles, August will mark the beginning of operational regulation proper. We’re not talking about marginal changes: 28.3% of all AI Act provisions will be operational by that date, creating a regulatory framework that companies can no longer ignore.
AI Act Implementation Timeline
🔍 Additional Notes
- Article 101 (Financial penalties for providers of general-purpose AI models) in Chapter XII does not come into effect on 02/08/2025
- 02/08/2026 represents the date of full implementation of all AI Act provisions
- Article 6, paragraph 1 in Chapter III has its application postponed to 02/08/2027
- Implementation percentage: From 02/08/2025, 28.3% of the AI Act provisions will apply (32 articles out of 113 total; a quick calculation follows this list)
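The percentage in the notes above is simple arithmetic; here is a minimal sketch (the helper name and structure are ours, purely for illustration) that reproduces the figure and can be updated as further provisions become applicable:

```python
# Illustrative helper (names are ours): share of AI Act articles applicable on a given date.
TOTAL_ARTICLES = 113  # total articles in the AI Act

def applicable_share(articles_in_force: int, total: int = TOTAL_ARTICLES) -> float:
    """Percentage of articles applicable, rounded to one decimal place."""
    return round(100 * articles_in_force / total, 1)

print(applicable_share(32))  # 28.3 -> the figure cited for 02/08/2025
```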
The Four Key Areas That Will Change Everything
1. Governance and Supervision: The Birth of the Control Ecosystem
Chapter VII introduces the European governance structure that will oversee the entire sector. This isn’t just bureaucracy: we’re witnessing the birth of a coordinated control system that will have the power to influence AI development globally. National authorities will need to coordinate with Brussels, establishing an unprecedented supervisory network.
The practical effect? Companies will have to deal with clear institutional interlocutors and standardized processes, but also with more stringent and coordinated controls.
2. General-Purpose AI Models: The Giant’s Rule (Now with a Concrete Roadmap)
Chapter V (articles 51-56) represents perhaps the most significant innovation for the current market, and recent weeks have clarified exactly how it will work in practice.
The Code of Practice is a Reality
On July 10, 2025, the European Commission officially received the Code of Practice for GPAI models, developed by 13 independent experts with input from over 1,000 stakeholders. This is no longer a hypothesis: the European AI Office is already inviting providers to sign this voluntary framework, which will become operational in three days. (see our article here)
What Changes Concretely:
- Simplified compliance path for AI model providers
- Greater legal certainty and reduction of administrative burdens
- Commission focus on code adherence rather than other enforcement methods
The Transparency Template is Final
On July 24, 2025, just six days ago, the Commission published the final template for training data transparency following a consultation that garnered over 430 responses. Now we know exactly what providers will have to declare (see our article here):
- General information: Model identification, modalities, training data sizes
- Data sources: Public datasets, licensed content, web-scraped data (top 10% of domains)
- Legal compliance: Copyright respect and illegal content removal measures
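To make those three groups of fields more concrete, here is a purely hypothetical sketch of how a provider might collect the information before filling in the Commission’s template; the field names are our own illustration and do not reproduce the official schema:

```python
# Hypothetical structure mirroring the three groups of fields listed above.
# Field names are illustrative only; they are NOT the official template schema.
training_data_summary = {
    "general_information": {
        "model_name": "example-model-v1",            # placeholder identifier
        "modalities": ["text", "image"],              # trained input/output modalities
        "training_data_size": "order of magnitude, e.g. tokens or hours",
    },
    "data_sources": {
        "public_datasets": ["<dataset names>"],
        "licensed_content": ["<licensors / collections>"],
        "web_scraped_domains": ["<most relevant domains>"],  # cf. the top-10% note above
    },
    "legal_compliance": {
        "copyright_measures": "<how copyright reservations / opt-outs are honored>",
        "illegal_content_measures": "<filtering and removal measures>",
    },
}
```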
Crystal Clear Timeline:
- August 2, 2025: Rules come into effect for new models
- August 2, 2026: Start of EU enforcement
- August 2, 2027: Compliance deadline for existing models
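Taking the three dates above at face value, the basic rule for a provider is straightforward: models placed on the market from August 2, 2025 must comply from the moment they are placed on the market, while models already on the market have until August 2, 2027. A minimal sketch of that rule, illustrative only and not legal advice:

```python
from datetime import date

RULES_APPLY_FROM = date(2025, 8, 2)          # obligations apply to newly placed GPAI models
EXISTING_MODELS_DEADLINE = date(2027, 8, 2)  # compliance deadline for pre-existing models

def gpai_compliance_deadline(placed_on_market: date) -> date:
    """Rule of thumb based on the timeline above; illustrative only, not legal advice."""
    if placed_on_market >= RULES_APPLY_FROM:
        return placed_on_market              # comply when the model is placed on the market
    return EXISTING_MODELS_DEADLINE          # older models: comply by 2 August 2027

print(gpai_compliance_deadline(date(2026, 1, 15)))  # 2026-01-15
print(gpai_compliance_deadline(date(2024, 11, 1)))  # 2027-08-02
```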
3. Control Infrastructure: Notifying Authorities and Notified Bodies
Section 4 of Chapter III (articles 28-39) creates the backbone of the control system for high-risk systems. We’re talking about the birth of a true European “FDA” for artificial intelligence, with:
- Notifying authorities that will accredit evaluation bodies
- Notified bodies that will certify AI system compliance
- Standardized procedures for conformity assessment
Why is it crucial? Because it establishes who can say “yes” or “no” to high-risk AI systems in Europe. That is enormous power, and it will shape the future of the sector.
The Fundamental Building Block: Harmonized Standards
However, there’s a crucial element that is being defined precisely these days and that will complete this regulatory puzzle. On June 23, 2025, a little over a month ago, the European Commission adopted a new Implementing Decision (C(2025)3871) that updates the previous standardization request, setting a deadline of August 31, 2025 for CEN and CENELEC to complete the harmonized standards supporting high-risk AI systems.
This decision is particularly significant because:
- Completely replaces the previous 2023 decision, aligning with the final AI Act text
- Revises the timeline: the deadline moves from April 2025 to the end of August 2025
- Requests harmonized standards that will confer a presumption of conformity on AI systems that comply with them
(Source: Implementing Decision C(2025)3871)
These standards will cover the most critical technical aspects of the AI Act, and without them, the entire control architecture would remain incomplete. As I explore in detail in my book “Artificial Intelligence, Neural Networks and Privacy: Striking a Balance between Innovation, Knowledge, and Ethics in the Digital Age”, technical standardization represents the indispensable bridge between regulatory principles and practical implementation.
4. Sanctions: When Compliance Becomes Mandatory
Chapter XII introduces the penalty regime (except Article 101, which will apply later). For the most serious violations, fines can reach €35 million or 7% of global annual turnover, whichever is higher. We’re not talking about symbolic fines: these are figures that can bring even the most solid multinationals to their knees.
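As an order-of-magnitude illustration of what that ceiling means, the cap for the most serious violations works as a “whichever is higher” formula between a fixed amount and a turnover percentage (with a “whichever is lower” rule for SMEs). A rough sketch, not legal advice:

```python
# Rough illustration of the ceiling for the most serious violations (prohibited practices):
# the higher of EUR 35 million or 7% of worldwide annual turnover; for SMEs, the lower of the two.
def max_fine_eur(worldwide_annual_turnover_eur: float, is_sme: bool = False) -> float:
    fixed_cap = 35_000_000
    turnover_cap = 0.07 * worldwide_annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

print(f"{max_fine_eur(200e9):,.0f}")             # large multinational: 14,000,000,000
print(f"{max_fine_eur(5e6, is_sme=True):,.0f}")  # small provider: 350,000
```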
The Latest Developments That Change Everything
European Parliament Support
On July 16, 2025, the co-chairs of the European Parliament’s AI Act Working Group, Brando Benifei (S&D, IT) and Michael McNamara (Renew, IE), issued a joint statement describing the GPAI Code of Practice as “a pragmatic and timely foundation” for managing systemic risks. A crucial political endorsement that legitimizes the chosen approach. (see our article here)
The Liability Framework Crisis
But not everything runs smoothly. Also on July 24, 2025, a European Parliament study revealed that the Commission is considering withdrawing the proposed AI Liability Directive (AILD) in 2025. This could create a significant regulatory vacuum precisely as the AI Act comes into full effect. (see our article here)
Why it’s a problem:
- Legal uncertainty for companies operating across EU borders
- Inadequate protection of victims when AI systems cause harm
- Market fragmentation with each Member State developing its own approach
Germany already has an autonomous driving law, and Italy is working on its AI bill. Europe has a few months to avoid irreversible market fragmentation.
The Cascade Effects That Are Emerging Now
Enhanced Brussels Effect
The AI Act won’t stop at European borders, and recent developments confirm this. Global companies seeking to operate in the EU market will need to comply with European rules, and the GPAI Code of Conduct now provides them with a clear path to do so. It’s the same phenomenon we saw with GDPR, but amplified and accelerated.
The Race to Sign
The European AI Office is already collecting signatures for the Code of Practice through the application form, which can be sent to EU-AIOFFICE-CODE-SIGNATURES@ec.europa.eu. Those who sign now will gain not only early compliance but also a significant reputational advantage.
The Regulatory Certainty Paradox
Paradoxically, more explicit rules could accelerate responsible innovation. The Code offers “greater predictability in regulatory expectations” and “reduction of administrative burdens” - precisely what companies were asking for.
What to Do Now: The Roadmap for the Next 72 Hours
For GPAI model providers:
- Immediate action: Contact EU-AIOFFICE-CODE-SIGNATURES@ec.europa.eu to sign the Code of Practice
- Prepare the transparency template using the format published on July 24
- Identify whether your models fall within the thresholds for advanced models with systemic risk (the presumption applies above 10^25 training FLOPs; see the sketch after this list)
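For the systemic-risk check mentioned in the last point, a common back-of-the-envelope heuristic estimates training compute as roughly 6 × parameters × training tokens and compares it with the 10^25 FLOP presumption threshold. Both the heuristic and the numbers below are illustrative, not an official calculation method:

```python
# Back-of-the-envelope check against the systemic-risk presumption threshold (10^25 FLOPs).
# Uses the common ~6 * parameters * training tokens approximation for dense transformer training;
# this is a heuristic, not the AI Act's or the Commission's official methodology.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    return 6 * n_parameters * n_training_tokens

flops = estimated_training_flops(n_parameters=70e9, n_training_tokens=15e12)  # hypothetical model
print(f"{flops:.2e}")                          # ~6.30e+24
print(flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False -> below the presumption threshold
```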
For large tech companies:
- Start immediately mapping your AI models against Chapter V thresholds
- Monitor developments on the AI Liability Directive
- Initiate dialogues with future notifying authorities in your key markets
For innovative SMEs:
- Don’t underestimate the impact: even apparently simple systems could fall within the definitions
- Consider compliance as a strategic investment, not as a cost
- New: Consider voluntary adherence to the Code of Practice as a signal of seriousness
For everyone:
- August 2, 2025, is three days away: prepare for the paradigm shift
- Start building internal AI governance processes
- Invest in training: Compliance will be a key competency
- Monitor the evolution of the liability framework to avoid surprises
Towards a Regulated but Uncertain Future
August 2, 2025, will not mark the end of AI innovation, but rather the beginning of a new chapter where technology and responsibility walk hand in hand. Europe is betting that one can be a technological leader while remaining an ethical leader.
However, recent developments show that the road is not without obstacles. While the GPAI Code of Practice represents a success of multi-stakeholder dialogue, the potential crisis of the liability framework reveals the complexities of such ambitious regulation.
The open questions:
- Will Europe manage to avoid AI liability market fragmentation?
- Will the Code of Practice prove effective, or turn into the “paper tiger” some in Parliament fear?
- How will global markets react to this regulatory acceleration?
Companies that understand how to navigate this complexity - embracing the opportunities of the Code of Conduct while preparing for the uncertainties of the liability framework - will not only survive but also prosper in a market that will increasingly reward responsible and transparent innovation.
The future of European AI is being written these days. Literally. Are you ready to help write it?
This article is based on the analysis of the official AI Act implementation timeline and the latest regulatory developments of July 2025. For real-time updates on these rapidly evolving developments, continue to follow our blog.
Related Hashtags
#AIAct #ArtificialIntelligence #August2nd2025 #AIRegulation #EuropeanAI #AICompliance #DigitalEurope #ResponsibleAI #TechRegulation #AIGovernance #GeneralPurposeAI #GPAI