First draft of the Code of Good Practice for Transparency in AI-Generated Content

Articles | 31 December 2025
The European Commission has published a first draft of the Code of Practice that develops the transparency obligations of Article 50 of the AI Regulation for content generated or manipulated by artificial intelligence systems.

This first draft of the Code of Good Practice seeks to establish guidelines to ensure transparency in content generated or manipulated by AI systems, in compliance with Article 50 of the AI Regulation. The aim is to facilitate the identification of synthetic content, protect fundamental rights and strengthen trust in the digital ecosystem. The text has been developed through a collaborative process involving multiple stakeholders (industry, academia, civil society) and is structured around two working groups:

  • Working group 1: requirements for the labelling and detection of AI-generated content.
  • Working group 2: disclosure requirements for deepfakes and AI-generated texts.

Key points:

  • Multi-layered labelling: AI system providers must apply combined techniques (metadata, imperceptible watermarks, certificates of origin) to ensure that content is detectable as artificially generated or manipulated.
  • Accessible and verifiable detection: providers are required to offer tools (APIs or public interfaces) so that third parties can verify the authenticity of content, including forensic mechanisms that do not rely solely on active markings.
  • Open standards and interoperability: the Code of Practice promotes the creation and adoption of European and international standards to ensure that solutions are interoperable and proportionate, especially for SMEs.
  • Clear labelling of deepfakes and texts of public interest: those responsible for deployment must use common icons and harmonised taxonomies to indicate whether the content is entirely AI-generated or assisted, ensuring accessibility (e.g. audio for visually impaired people).
  • Exceptions and proportionality: exceptions are provided for legal uses (e.g. criminal investigation) and for artistic, creative or satirical works, applying transparency in a way that does not affect the quality or normal exploitation of the work.
  • Compliance and training: both providers and those responsible for deployment must implement internal compliance frameworks, periodic testing, staff training and cooperation with supervisory authorities.
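To make the "multi-layered labelling" and "passive detection" ideas above concrete, the sketch below illustrates the general technique in Python. It is purely illustrative and not any standardized scheme: the bit pattern, the metadata fields, and the least-significant-bit (LSB) embedding are all hypothetical stand-ins for the real mechanisms (e.g. provenance certificates and robust watermarks) that the Code of Practice envisages.

```python
import hashlib
import json

# Hypothetical provider-specific watermark pattern (illustrative only).
WATERMARK_BITS = "1010110011"

def label_content(pixels, provider_id):
    """Apply two complementary labels to synthetic image data:
    1. a machine-readable metadata record (explicit, but easily stripped), and
    2. an imperceptible mark in the least significant bits of the pixels
       (survives metadata stripping)."""
    metadata = {
        "ai_generated": True,
        "provider": provider_id,
        "content_hash": hashlib.sha256(bytes(pixels)).hexdigest(),
    }
    # Embed the watermark: overwrite the LSB of the first len(bits) pixels.
    marked = list(pixels)
    for i, bit in enumerate(WATERMARK_BITS):
        marked[i] = (marked[i] & ~1) | int(bit)
    return marked, json.dumps(metadata)

def detect_watermark(pixels):
    """Passive detection: read back the LSBs and compare to the pattern,
    without relying on the (removable) metadata record."""
    extracted = "".join(str(p & 1) for p in pixels[: len(WATERMARK_BITS)])
    return extracted == WATERMARK_BITS

pixels = [200, 13, 57, 88, 91, 140, 33, 250, 7, 64, 128, 99]
marked, meta = label_content(pixels, "example-provider")
print(detect_watermark(marked))  # True: the embedded pattern is found
```

Real watermarking schemes are designed to survive compression, cropping and re-encoding, which a naive LSB mark does not; the sketch only shows why combining an explicit metadata layer with an independent imperceptible layer makes labels harder to strip in full.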

This draft is a first step towards a common European framework to ensure transparency in AI-generated content. Although it still requires adjustments and input, it establishes essential principles to protect information integrity, promote public trust and comply with the AI Regulation.


The full draft published by the European Commission can be accessed on the Commission's website.
