Misuse of deepfakes in Spain - is the fine sufficient?
The Spanish Data Protection Agency (AEPD) recently issued a resolution concerning the dissemination of images altered by artificial intelligence. In those images, real faces had been inserted onto digitally generated nude bodies, and the results were then shared through social networks and messaging services.
In its analysis, the AEPD established that an image - even one that has been digitally altered - remains personal data if it allows a person to be identified. Consequently, the manipulation and dissemination of these images constitutes processing of personal data under the General Data Protection Regulation (GDPR). The Agency was also clear that the processing was carried out without any lawful basis: there was no consent, legal obligation or other condition that would legitimise it, which makes it plainly unlawful.
The resolution also contains relevant details about the affected persons that reveal the particular seriousness of the case. It can be inferred that the victims were minors: the AEPD notes that those affected enjoy enhanced protection - a category the regulations reserve exclusively for minors - and expressly cites Article 84 of the Organic Law on the Protection of Personal Data and Guarantee of Digital Rights (LOPDGDD), which concerns the protection of minors on the Internet. The decision also refers to the "parents" as the offender's legal representatives, reinforcing that interpretation.
Finally, in light of the facts set out, the AEPD initially imposed a fine of 2,000 euros, which was reduced to 1,200 euros after the offender acknowledged liability and made voluntary payment - reductions expressly provided for in the regulations.
This decision sets an important precedent: it is the first sanction to directly address the use of deepfakes, i.e. images generated or altered by artificial intelligence. Deepfakes allow faces and bodies to be combined or modified with a high degree of realism, making it extremely difficult to distinguish what is real from what is fake. Although the technology behind deepfakes has legitimate applications, its misuse - especially when minors are involved - constitutes a serious infringement of their rights.
Despite the above, one of the most striking elements of the case is the sanction itself. A fine of 1,200 euros - around one million Chilean pesos - is clearly insufficient given the seriousness of the facts. Considering that artificial intelligence techniques were used to generate false sexual content, and that everything points to the victims being minors, the fine is derisory in relation to the harm this type of practice can cause. The magnitude of the risk, the potential for viral spread and the difficulty of reversing the dissemination of such content show that the sanctioning framework cannot adequately respond to a technology that exponentially amplifies the consequences.
In short, this case should prompt reflection and change in Chile. With the entry into force of Law N°19.628, as amended by Law N°21.719, it will be essential that the sanctioning and supervisory framework is prepared to deal with practices of this kind - all the more so given that serious cases have already occurred in our country, such as the 2024 incident at Saint George's School, where false images of students created with artificial intelligence were disseminated. A robust institutional framework and truly dissuasive tools will be key to facing the challenges that artificial intelligence poses for data protection in our country.