Criminal liability in the use of artificial intelligence and deepfakes

8 April 2026
In recent years, artificial intelligence has established itself as one of the most disruptive tools of the digital age, capable of generating content with a degree of realism that calls into question the traditional distinction between truth and falsehood.

The misuse of deepfakes has led to new forms of infringement of traditional legal rights, such as honor, privacy, personal image, sexual freedom, and public safety. In light of this situation, criminal law and digital law are forced to rethink their traditional categories in order to provide adequate responses without undermining fundamental principles such as legality, culpability, and minimal intervention.


Artificial intelligence enables the creation and manipulation of images, audio, and video through algorithms capable of imitating the physical characteristics, voices, and gestures of real people with great accuracy. Deepfakes themselves do not constitute criminal behavior, as their use can be legitimate in artistic, educational, or scientific contexts. However, when these tools are used to deceive, harm, or exploit others, they become an ideal means to commit crimes.


One of the main problems posed by the criminal use of deepfakes is the simultaneous violation of multiple legal rights. The creation and dissemination of false content without consent directly infringes on the right to privacy and personal image, especially when the material is of a sexual nature. In such cases, digital violence takes on a particularly serious dimension, as the harm multiplies due to the viral nature of platforms and the difficulty of completely removing content once it has been disseminated.


Additionally, deepfakes can be used to commit fraud, extortion, or threats by simulating messages or statements falsely attributed to the victim. In the political and institutional realm, the dissemination of falsified videos or audio recordings of public officials can generate widespread misinformation, undermine social trust, and endanger democratic stability. In this way, artificial intelligence becomes a tool capable of amplifying the scope and severity of already known criminal behaviors.


From a criminal law perspective, determining authorship and criminal liability constitutes one of the central challenges. In the production chain of a deepfake, various parties may participate: those who develop the software, those who market or make it available, those who generate the content, and those who disseminate it. The principle of personal criminal liability requires a case-by-case analysis of the specific conduct, the subjective element, and the degree of participation of each intervenor.


In general terms, criminal liability will fall on those who create or disseminate false content with knowledge of its fraudulent nature and with the intention of causing harm or obtaining undue benefit. However, the anonymity characteristic of many digital environments, along with the transnational nature of the internet, complicates the identification of those responsible and raises serious questions of jurisdiction and international cooperation.


Another relevant aspect is the classification of these actions in criminal law. In many legal systems, crimes committed through deepfakes can be subsumed under traditional criminal offenses, such as crimes against honor, threats, fraud, or sexual offenses. However, there are cases in which this subsumption is insufficient or forced, which has generated doctrinal and legislative debates about the advisability of creating specific criminal offenses related to digital manipulation through artificial intelligence.


The creation of new crimes must be approached with caution to avoid excessive expansion of criminal law that compromises freedom of expression and other fundamental rights. Criminal law, as a last resort, cannot be the only response to the risks posed by artificial intelligence, but must be complemented with mechanisms from civil and administrative law, as well as public policies aimed at prevention and digital education.

In the procedural realm, digital evidence plays a decisive role. Establishing the authenticity or falsehood of content generated by artificial intelligence requires specialized technical knowledge and digital forensic analysis tools. Proper preservation of evidence, compliance with the chain of custody, and training of legal professionals are essential to ensure well-founded judicial decisions that respect procedural guarantees.


In conclusion, the use of artificial intelligence and deepfakes for criminal purposes represents a current and complex challenge for criminal law and digital law. The legal response must be balanced, avoiding both impunity and over-criminalization. Only through reasonable regulatory adaptation, adequate technical training, and effective cooperation among states will it be possible to address new forms of digital crime legitimately and effectively.


Article written by Ana Luengo, associate of the Economic Crimes practice group.
