Companies will be required to label AI-generated content starting from August 2026

Articles | 17 September 2025
On August 2, 2026, the obligation to label content generated by artificial intelligence comes into force. The European AI Regulation requires clear, visible warnings, in addition to a technical marking detectable by machines.

The countdown has begun. Under the European Artificial Intelligence Regulation (RIA), from August 2, 2026, all companies will be required to explicitly label content created by AI systems, whether text, images, audio, or video. The goal: to ensure transparency and avoid confusion about the origin and authenticity of digital materials.


The RIA establishes a dual labeling system: a machine-readable marking for synthetic audio, image, video, or text content, and a visible warning for people in cases where they interact directly with AI or where content such as deepfakes or informative publications of public interest is involved.


As Juan Carlos Guerrero, TMT (IT/IP) partner at ECIJA, explains, the key lies in how significant the technology's contribution is:

“Hybrid content should only be labeled when the intervention of AI is substantial and could mislead about its origin or authenticity. What matters is not a percentage, but the significance of the modification introduced by AI.”


Juan Carlos adds that, in informative texts of public interest, the warning may not be necessary if there has been human review under editorial responsibility. However, he warns that publishing without human oversight can lead to the dissemination of incorrect, biased, or even illegal information.


Frequent Mistakes and Legal Risks

Experts agree that companies often make recurring mistakes: lack of oversight, use of superficial warnings that can be easily removed, or the absence of a clear strategy for the use of AI. These mistakes also heighten legal risks such as copyright infringement, privacy violations, and misleading business practices.


Furthermore, sectors such as media, entertainment, digital marketing, technology platforms, and e-commerce are the most exposed to these obligations.


Steps for Compliance

Key recommendations for complying with the regulations include:


  • Clear protocols to identify when labeling is required.
  • Visible warnings to the user and technical marking in metadata.
  • Continuous human oversight, especially in deepfakes or news.
  • Internal documentation of processes and decisions for audits.
  • Layered strategy, combining standards like C2PA with cryptographic signatures and metadata protection policies.
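The last two recommendations, technical marking plus cryptographic protection, can be illustrated with a minimal sketch. This is not a C2PA implementation: the field names (`ai_generated`, `generator`), the record format, and the shared-secret HMAC signing are illustrative assumptions, using only Python's standard library, to show how a machine-readable marker can be bound to a content asset so it cannot be silently stripped or altered.

```python
import hashlib
import hmac
import json

# Hypothetical signing key: a real deployment would use managed key material,
# not a hard-coded secret.
SIGNING_KEY = b"replace-with-organisation-secret"

def make_provenance_record(content: bytes, generator: str) -> dict:
    """Build a machine-readable AI-provenance record for a content asset."""
    record = {
        "ai_generated": True,          # the machine-detectable marker
        "generator": generator,        # which system produced the content
        "sha256": hashlib.sha256(content).hexdigest(),  # binds record to asset
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # Signature over the record so removal or tampering is detectable
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_record(content: bytes, record: dict) -> bool:
    """Check that the record matches the asset and the signature is intact."""
    body = {k: v for k, v in record.items() if k != "signature"}
    if body.get("sha256") != hashlib.sha256(content).hexdigest():
        return False
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.get("signature", ""))
```

A production system would instead embed a standards-based manifest (e.g. C2PA) signed with certificate-backed keys rather than a shared-secret HMAC, but the layering idea is the same: the visible label is for people, while the signed record travels with the file for machines and auditors.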


In Juan Carlos's words, the challenge is not just to comply with the regulations but to generate trust in the digital ecosystem:

“Companies must integrate transparency into their content creation processes, not as a formal requirement but as a commitment to ethics and responsibility in the use of AI.”


With less than a year until the obligation comes into force, the time to prepare is limited. Companies should begin adapting their protocols and systems now to avoid sanctions and ensure the legitimacy of their content.

