Today’s Breakthrough: First Draft of the Code Now Available

December 17, 2025 - The European Commission today published the first draft of the Code of Practice on the marking and labelling of AI-generated content. This long-awaited document marks a fundamental step in regulating AI transparency in Europe. According to the timeline communicated by the Commission, the publication of the first draft opens a process leading to the finalization of the Code by June 2026, when it will become the reference tool for providers and deployers of generative artificial intelligence systems.

This moment represents the culmination of the first phase of a process launched on November 5, 2025, when the European AI Office initiated a collaborative drafting exercise involving independent experts, industry stakeholders, civil society, and academia. The first draft published today serves as the working foundation for the final Code, whose obligations become applicable on August 2, 2026.

Structure of the First Draft

The document presented today is organized into two distinct sections, each dedicated to specific categories of operators and obligations:

First section - Marking and detection of AI content: aimed at providers of generative AI systems, this section establishes technical rules for marking AI-generated or manipulated content in machine-readable formats, enabling automatic detection.

Second section - Labelling of deepfakes and AI texts: addressed to deployers (users) of generative AI systems for professional purposes, this section defines methods for clearly labelling deepfakes and AI-generated or manipulated text publications on matters of public interest.

Next Steps in the Process

Today’s publication opens a crucial consultation phase: the Commission will collect feedback on the first draft from participants and observers until January 23, 2026. This period will allow stakeholders to examine in detail the technical and regulatory proposals contained in the document and provide constructive contributions.

Based on the feedback received, a second draft is planned for publication by mid-March 2026, followed by further consultation rounds leading to the finalization of the Code by June 2026. The rules on the transparency of AI-generated content will enter into force on August 2, 2026, giving providers and deployers time to implement the necessary technical and organizational measures.

The Regulatory Context: The AI Act and Article 50

The first draft published today constitutes the practical implementation of Article 50 of the AI Act (Regulation (EU) 2024/1689), the world’s first comprehensive regulatory framework for artificial intelligence. The AI Act, which entered into force on August 1, 2024, adopts a risk-based approach, classifying AI systems into four categories: unacceptable risk (prohibited systems), high risk (subject to stringent requirements), limited risk (subject to transparency obligations), and minimal risk (unregulated).

Article 50, titled “Transparency obligations for providers and deployers of certain AI systems,” is the regulatory core that the draft Code of Practice published today is called upon to implement. This article establishes specific transparency requirements for providers and deployers of generative and interactive AI systems to reduce risks of disinformation, fraud, impersonation, and consumer deception while promoting trust in the digital information ecosystem.

Main Obligations of Article 50

Article 50 establishes several fundamental obligations:

For providers of generative AI systems: they must ensure that the outputs of their systems (audio, images, video, text) are marked in a machine-readable format and detectable as artificially generated or manipulated. The technical solutions employed must be practical, interoperable, robust, and reliable to the extent technically feasible, taking into account the specificities and limitations of various types of content, implementation costs, and the generally recognized state of the art.

For deployers of systems that generate or manipulate deepfakes: they must disclose that the content has been artificially generated or manipulated. A deepfake is defined as AI-generated or manipulated image, audio, or video content that resembles existing persons, objects, places, entities, or events and could falsely appear to a person as authentic or truthful.

For deployers who publish AI-generated text: when text is published to inform the public on matters of public interest, it must be disclosed that it has been artificially generated or manipulated, unless the content has undergone human review or editorial control and a natural or legal person assumes editorial responsibility for the publication.

Exemptions: these obligations do not apply when use is authorized by law to detect, prevent, investigate, or prosecute criminal offenses. For evidently artistic, creative, satirical, fictional, or analogous content, transparency obligations are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.
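To make the conditional structure of these obligations concrete, the following minimal Python sketch encodes one possible reading of the Article 50(4) decision tree described above. It is an illustration, not legal advice: every field name is shorthand invented for this example, not language from the draft.

```python
from dataclasses import dataclass

@dataclass
class ContentContext:
    """Facts about a piece of deployed AI content relevant to Article 50(4)."""
    is_deepfake: bool                    # image/audio/video resembling real persons, places, events
    is_public_interest_text: bool        # text published to inform the public
    human_editorial_responsibility: bool # human review plus assumed editorial responsibility
    law_enforcement_authorised: bool     # use authorised by law for criminal matters
    evidently_artistic: bool             # evidently artistic, creative, satirical, fictional

def disclosure_required(ctx: ContentContext) -> str:
    """Return "none", "full", or "limited" (limited = disclose existence
    in a way that does not hamper display or enjoyment of the work)."""
    if ctx.law_enforcement_authorised:
        return "none"       # exemption for legally authorised criminal-justice uses
    if ctx.evidently_artistic:
        return "limited"    # lighter disclosure duty for artistic or satirical works
    if ctx.is_deepfake:
        return "full"       # deepfakes must always be disclosed
    if ctx.is_public_interest_text and not ctx.human_editorial_responsibility:
        return "full"       # AI text on public-interest matters without editorial control
    return "none"

# Example: an AI-drafted news item that passed human editorial review
assert disclosure_required(ContentContext(False, True, True, False, False)) == "none"
```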

The First Draft: Contents and Objectives

The draft published today translates the obligations under Article 50 of the AI Act into operational terms, offering a practical, voluntary framework for demonstrating compliance with European regulations. The objective is to provide providers and deployers of generative AI systems with clear and technically achievable guidelines to ensure transparency of synthetic content.

Section 1: Marking and Detection for Providers

The first section of the Code focuses on the marking technologies and methodologies that providers of generative AI systems must implement. In line with Article 50, these obligations provide that:

  • The outputs of AI systems (audio, images, video, text) should be marked in a machine-readable format and detectable as artificially generated or manipulated
  • The technical solutions employed should be practical, interoperable, robust, and reliable to the extent technically feasible, taking into account the specificities and limitations of various types of content, implementation costs, and the generally recognized state of the art

This section addresses crucial technical issues such as the following (a code sketch of the underlying idea appears after the list):

  • Digital watermarking standards
  • Structured and cryptographically secure metadata
  • Interoperability protocols between different platforms
  • Automatic detection technologies
  • Robustness against removal or alteration of markings
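The draft’s concrete technical specifications are not reproduced in this article. Purely as an illustration of the idea behind structured, cryptographically secured metadata, the following stdlib-only Python sketch builds a provenance manifest bound to the content by a hash and protected by a keyed signature. All schema and field names are invented for the example; production systems would use asymmetric signatures and established standards (C2PA-style content credentials) rather than a shared secret.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"provider-held-secret"  # stand-in: real systems would use asymmetric keys

def build_manifest(content: bytes, model_id: str) -> dict:
    """Attach a machine-readable provenance record to generated content."""
    manifest = {
        "schema": "example/provenance-v0",   # hypothetical schema identifier
        "generator": model_id,
        "ai_generated": True,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "created_utc": int(time.time()),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Third-party check: is the record intact and bound to this exact content?"""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())
```

Any tampering with either the content or the manifest makes verification fail, which is the property automatic detection systems need; the open question the Code must settle is which standards make such records interoperable across platforms.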

Section 2: Labelling for Deployers

The second section addresses deployers - professional users of generative AI systems - and establishes clear disclosure obligations when:

  • Deepfakes are used: deployers must disclose that the content (images, audio, or video resembling existing persons, objects, places, entities, or events) has been artificially generated or manipulated

  • AI-generated texts on matters of public interest are published: when text is published to inform the public on matters of public relevance, it must be disclosed that it has been artificially generated or manipulated

Exemptions provided: obligations do not apply when use is authorized by law to detect, prevent, investigate, or prosecute criminal offenses. For evidently artistic, creative, satirical, fictional, or analogous content, transparency obligations are limited to disclosing the existence of the generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.

The Development Process: From November 5, 2025, to Today

The publication of the first draft today marks the culmination of the first work cycle, launched on November 5, 2025. On that date, the AI Office held an inaugural plenary meeting that brought together independent experts, initiating a collaborative journey unprecedented in its complexity and inclusiveness.

The Code’s development process was organized around two dedicated working groups, which follow the structure of transparency obligations for AI-generated content in Article 50:

Working Group 1: Marking and Detection Techniques for Providers

Chaired by Professor Kalina Bontcheva from the University of Sheffield, this group focused on obligations requiring providers of generative AI systems to ensure that outputs are marked in a machine-readable format and detectable as artificially generated. This group’s work produced the first section of the draft published today.

Professor Bontcheva leads a research team of 20 people at the University of Sheffield’s School of Computer Science, and her research focuses on machine learning methods for detecting misinformation and disinformation, including AI-generated content, as well as online abuse analysis, hate speech detection, and large-scale real-time analysis of social media content.

Working Group 2: Disclosure of Deepfakes and AI-Generated Texts

Led by Professor Anja Bechmann from Aarhus University in Denmark, this group worked on disclosure obligations for deployers and produced the second section of the draft. Professor Bechmann is the director of DATALAB - Center for Digital Social Research, and her research examines platform collective behavior through large-scale trace data, with particular attention to challenges to democracy such as misinformation and deepfakes.

The vice-chair of this group is Giovanni De Gregorio, holder of the PLMJ chair of Law and Technology at Católica Global School of Law, whose expertise lies at the intersection of European law, constitutional law, and digital policy, with a focus on AI regulation.

The chairs and vice-chairs of each working group provided strategic leadership and guidance, ensuring that discussions remained focused and productive while balancing technical rigor with operational feasibility.

Stakeholders Involved in the Process

The drafting of the Code involves eligible stakeholders who responded to a public call launched by the AI Office. This group includes:

  • Providers of specific generative AI systems: companies that develop and provide generative AI technologies
  • Developers of marking and detection techniques: experts and organizations specialized in developing technical solutions to identify AI-generated content
  • Associations of deployers of generative AI systems: organizations representing users of these systems
  • Civil society organizations: groups representing citizens’ interests and the protection of fundamental rights
  • Academic experts: researchers and scholars with expertise in AI, law, ethics, and social sciences
  • Specialized organizations with expertise in transparency and very large online platforms: entities with experience in managing transparency on digital platforms

In its role as facilitator, the AI Office also invited international and European observers to participate in drafting the Code of Practice. These organizations did not meet the eligibility criteria of the call, but can still contribute valuable expertise and submit written input. All participants and observers will be invited to attend plenary sessions, working group meetings, and thematic workshops to discuss the technical aspects of the Code.

The Timeline: From First Draft to Final Code

The publication of the first draft today, December 17, 2025, marks a fundamental milestone in a multi-stage process that will lead to the Code’s definitive adoption. The schedule is as follows:

  • November 5, 2025: Kick-off plenary - Launch of the development process
  • November 17-18, 2025: First working group meetings
  • December 17, 2025: Publication of the first draft (TODAY)
  • January 23, 2026: Deadline for collecting feedback on the first draft
  • Mid-March 2026: Expected publication of the second draft
  • June 2026: Finalization and approval of the Code of Practice
  • August 2, 2026: Entry into force of marking obligations for AI-generated content

This timeline, as outlined in the Commission’s official communications, gives providers and deployers time to prepare for compliance before the rules enter into force in August 2026. Although only about two months will separate the finalization of the Code from its mandatory application, successive public drafts give organizations visibility well in advance, allowing them to:

  • Study technical requirements in detail
  • Implement necessary technological solutions
  • Train personnel
  • Update operational processes
  • Test marking and detection systems
  • Adapt disclosure policies

During the development process, several fundamental inputs were taken into consideration:

  • Feedback from the multi-stakeholder consultation on transparency requirements for specific AI systems (September-October 2025)
  • Expert studies commissioned by the AI Office
  • Contributions from eligible stakeholders participating in drafting the Code of Practice
  • International approaches and emerging technical standards

Parallel Commission Guidelines

In parallel with the development of the Code of Practice, the Commission will prepare guidelines to clarify the scope of legal obligations and address aspects not covered by the Code. This dual approach ensures that both voluntary (through the Code of Practice) and mandatory (through official guidelines) aspects are adequately addressed, providing a comprehensive framework for compliance.

Practical Implications of the First Draft

The publication of the first draft today provides, for the first time, concrete indications of how providers and deployers will need to operate. The practical implications unfold on multiple levels:

For Providers of Generative AI Systems

The first section of the draft published today requires providers to implement advanced technical solutions. This entails:

Implementation of marking systems (a toy watermarking sketch follows this list):

  • Development or adoption of digital watermarking technologies resistant to manipulation
  • Integration of structured metadata accompanying each generated output
  • Implementation of cryptographically secure identifiers
  • Creation of logging systems documenting content generation
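As a toy illustration of what an invisible, machine-readable mark can look like, the sketch below hides a payload in text using zero-width Unicode characters. This naive scheme is trivially destroyed by retyping or normalizing the text; real provider-side watermarks (for instance, statistical watermarking of a model’s output distribution) are designed to survive exactly such transformations. Nothing here comes from the draft itself.

```python
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed_mark(text: str, payload: bytes) -> str:
    """Append an invisible bit string encoding the payload (toy scheme)."""
    bits = "".join(f"{byte:08b}" for byte in payload)
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract_mark(text: str) -> bytes:
    """Recover the payload from the zero-width characters, if present."""
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))

marked = embed_mark("Quarterly outlook remains stable.", b"AI")
assert extract_mark(marked) == b"AI"   # invisible to readers, readable by machines
```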

Guarantee of interoperability:

  • Adoption of open standards for marking
  • Compatibility with different file formats and platforms
  • Readability by third-party detection systems
  • Complete technical documentation of implemented solutions

Robustness and reliability (a test-harness sketch follows this list):

  • Resistance testing against attempts to remove or alter markings
  • Continuous monitoring of system effectiveness
  • Regular updates to counter new evasion techniques
  • Maintenance of audit logs
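Resistance testing of this kind can be framed as measuring a mark’s survival rate under simulated attacks. The sketch below assumes hypothetical detect and perturb callables, since the draft summarized here does not prescribe specific tooling:

```python
from typing import Callable, Iterable

def survival_rate(samples: Iterable[bytes],
                  detect: Callable[[bytes], bool],    # hypothetical: is the mark still present?
                  perturb: Callable[[bytes], bytes],  # simulated attack, e.g. re-encoding or cropping
                  ) -> float:
    """Fraction of marked samples whose mark survives a given perturbation."""
    samples = list(samples)
    if not samples:
        return 0.0
    return sum(1 for s in samples if detect(perturb(s))) / len(samples)

# Usage sketch (all names hypothetical):
# for name, attack in {"jpeg_recompress": recompress, "crop_10pct": crop}.items():
#     print(name, survival_rate(marked_corpus, detector, attack))
```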

Economic and technical considerations:

  • Balance between implementation costs and effectiveness
  • Evaluation of the state of the art in technology
  • Adaptation to the specificities of different content types (audio, video, images, text)
  • Scalability of solutions

For Deployers of AI-Generated Content

The second section of the draft published today establishes clear obligations for those who use generative AI systems in professional contexts:

For deepfakes (an image-labelling sketch follows this list):

  • Implementation of visible and understandable warnings indicating the synthetic nature of the content
  • Strategic positioning of labels to ensure visibility
  • Unambiguous language in disclosures
  • Persistence of labels when content is shared or reused
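As an illustration of a visible, persistently positioned label, the following sketch stamps a disclosure banner directly into an image’s pixels using the third-party Pillow library. Because the banner is baked into the content, it survives sharing and re-uploading in a way that detachable metadata does not; placement, wording, and sizing here are arbitrary choices, not requirements from the draft.

```python
from PIL import Image, ImageDraw  # third-party: pip install Pillow

LABEL = "AI-generated content"

def add_disclosure_banner(path_in: str, path_out: str) -> None:
    """Draw a high-contrast disclosure banner across the bottom of an image."""
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    banner_height = max(24, img.height // 12)          # scale banner with image size
    top = img.height - banner_height
    draw.rectangle([(0, top), (img.width, img.height)], fill=(0, 0, 0))
    draw.text((10, top + banner_height // 4), LABEL, fill=(255, 255, 255))
    img.save(path_out)
```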

For textual publications on matters of public interest:

  • Clear identification of AI-generated or manipulated content
  • Transparent disclosure of AI use in the creation process
  • Differentiation between entirely generated content and AI-assisted content
  • Internal documentation of processes

Balance with creative needs:

  • Respect for exemptions for artistic, satirical, or fictional content
  • Disclosure methods that do not hinder the enjoyment of the work
  • Flexibility in label presentation format
  • Consideration of editorial context

Organizational processes:

  • Staff training on disclosure obligations
  • Creation of workflows to verify and document AI use
  • Clear internal policies on when and how to disclose
  • Quality control systems to ensure compliance

For Online Platforms

Large online platforms, particularly those classified as Very Large Online Platforms (VLOPs) under the Digital Services Act, are likely to be required to integrate AI content marking and labelling mechanisms into their detection, moderation and transparency systems, in coordination with their existing obligations under the DSA.

In practical terms, platforms may need to, inter alia (see the sketch after this list):

  • Implement automatic detection systems for AI-generated content
  • Provide users with tools to identify synthetic content
  • Collaborate with AI providers to ensure interoperability of marking systems
  • Develop moderation policies that account for the specificities of AI-generated content
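Purely as an illustration, the sketch below shows how a platform ingestion pipeline might combine these duties: trust an intact machine-readable mark first, then fall back on a heuristic detector for unmarked content. Both injected callables are hypothetical, and neither the threshold nor the label taxonomy comes from the draft.

```python
from typing import Callable, Optional

def classify_upload(content: bytes,
                    manifest: Optional[dict],
                    verify_manifest: Callable[[bytes, dict], bool],  # e.g. the provider-side check sketched earlier
                    ai_score: Callable[[bytes], float],              # hypothetical ML detector for unmarked content
                    ) -> str:
    """Decide which label, if any, a platform attaches to an upload (illustrative only)."""
    if manifest is not None and verify_manifest(content, manifest):
        return "label_as_ai"          # strong signal: intact machine-readable mark
    if ai_score(content) > 0.9:
        return "label_as_likely_ai"   # weaker heuristic signal, softer wording
    return "no_label"
```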

Challenges and Future Prospects

Technical Challenges

Effective implementation of the Code of Practice will need to address several technical challenges:

  1. Interoperability: ensuring that marking systems work across different platforms and technologies
  2. Evasion: preventing removal or alteration of markings by malicious actors
  3. Technological evolution: keeping pace with the rapid development of generative AI technologies
  4. Sophisticated detection: developing techniques capable of identifying increasingly realistic deepfakes

Balance of Interests

The Code will need to balance different interests:

  • Protection of citizens from disinformation and deception
  • Promotion of AI innovation
  • Protection of freedom of expression and artistic creativity
  • Protection of personal data and privacy

International Coordination

While the EU leads with this initiative, the global nature of AI will require international coordination. The Code of Practice could serve as a model for other jurisdictions, but transnational cooperation will be necessary to address content circulating across borders effectively.

Enforcement and Compliance

The effectiveness of the Code will depend on:

  • Clear enforcement mechanisms at the national level
  • Adequate resources for supervisory authorities
  • Cooperation between Member States
  • Proportionate but deterrent sanctions for non-compliance (for breaches of the Article 50 transparency obligations, the AI Act provides for fines of up to 15 million euros or 3% of global annual turnover; its highest tier, 35 million euros or 7%, is reserved for prohibited practices)

The Broader Context: AI Act and EU Digital Strategy

This initiative is part of a broader AI regulatory framework at the European level. The AI Act also includes:

  • AI Pact: a voluntary initiative helping stakeholders prepare for compliance
  • Code of Practice for General-Purpose AI: addressing large AI models
  • Regulatory sandboxes: controlled environments in which companies can develop and test innovative AI systems under regulatory supervision before placing them on the market
  • Apply AI Alliance: a coordination forum for AI stakeholders and policymakers
  • AI Observatory: to monitor AI trends and assess AI impact in specific sectors

In November 2025, the European Commission also presented a digital simplification package including targeted amendments relevant to the implementation of the AI Act, demonstrating its commitment to a clear, simple, and innovation-friendly rollout of the regulation.

Conclusions: A Historic Step Toward AI Transparency

The publication of the first draft of the Code of Practice today, December 17, 2025, represents a watershed moment in global artificial intelligence regulation. For the first time, the European Union does not merely establish general principles but also offers a detailed operational framework that translates legal obligations into concrete, technically achievable actions.

The Significance of the First Draft

This document, the result of over a month of intensive collaborative work among independent experts, industry, civil society, and academia, demonstrates that it is possible to build rules for AI through an inclusive, technically informed process. The bipartite structure of the Code - which clearly separates providers’ obligations from those of deployers - reflects a mature understanding of the AI value chain and the different responsibilities along it.

The Next Two Decisive Months

The consultation period, which opens today and runs until January 23, 2026, will be crucial. Stakeholders will have the opportunity to:

  • Test the technical feasibility of proposed solutions
  • Identify potential gaps or ambiguities
  • Propose improvements based on operational experience
  • Contribute to refining technical and regulatory language

The quality of feedback received will determine the robustness of the second draft expected in March 2026 and, ultimately, the effectiveness of the final Code.

Global Leadership

With this initiative, Europe is defining global standards for AI transparency. The Code of Practice, once finalized in June 2026, will likely serve as a reference for other jurisdictions facing similar challenges. Its voluntary nature, combined with strong incentives for compliance, represents an innovative regulatory model: flexible enough to adapt to technological evolution, yet rigorous enough to guarantee adequate protection.

The Remaining Challenges

Despite the significant progress represented by today’s publication, substantial challenges remain:

On the technical level:

  • Ensuring marking systems resist evasion attacks
  • Developing solutions that work effectively on a global scale
  • Maintaining interoperability in a fragmented technological ecosystem
  • Constantly adapting to the evolution of generative capabilities

On the operational level:

  • Balancing implementation costs and societal benefits
  • Managing the transition for companies of different sizes
  • Coordinating enforcement across 27 Member States
  • Monitoring the effectiveness of adopted measures

On the cultural level:

  • Educating the public on the meaning of labels
  • Building trust in marking systems
  • Promoting a culture of transparency in the AI ecosystem
  • Balancing innovation and responsibility

Toward August 2, 2026

Over the roughly seven and a half months until the rules enter into force on August 2, 2026, the European AI ecosystem will undergo a profound transformation. Providers and deployers will need to implement technical solutions, rethink their processes, train staff, and build a new culture of transparency.

The first draft published today provides the roadmap for this journey. It is not a perfect document - nor should it be at this stage. Instead, it is a solid foundation upon which to build, through dialogue and iterative refinement, the regulatory framework that will guide Europe and, probably, the world toward more transparent and trustworthy AI.

A Model for the Future

The process that led to today’s publication - inclusive, technically informed, and open to participation from all relevant stakeholders - itself represents an innovation in how to regulate complex, rapidly evolving technologies. This approach, which balances regulatory certainty with the flexibility required by technological innovation, could become the model for future digital regulations.

The European Union, with the publication today of the first draft of the Code of Practice, once again demonstrates its determination to build a digital economy that is both innovative and respectful of fundamental rights. The path to full implementation is still long, but the direction is clear: artificial intelligence that serves humanity, with transparency, accountability, and trust at its core.

The world is watching. And the response Europe will give in the coming months, through the refinement of this Code and its practical implementation, will set the standards for global AI governance in the years to come.


Related hashtags

#AIAct #ArtificialIntelligence #Deepfake #AITransparency #EURegulation #AIGeneratedContent #AIMarking #AILabelling #AIOffice #EuropeanCommission #GenerativeAI #DigitalTransparency #DigitalPolicy #ResponsibleInnovation #AIGovernance #DigitalSecurity #AIDisinformation #DigitalEurope