Experts predict a wave of claims for misuse of AI
Artificial intelligence has become one of the main areas of regulatory concern on a European scale. In a context marked by recent cases highlighting the risks arising from the misuse of personal data, legal experts and regulators anticipate a significant increase in enforcement activity in the coming months.
This was one of the central topics of the debate held in Madrid on the occasion of the presentation of the sixth edition of the Practical Guide to Artificial Intelligence, published by Aranzadi LA LEY, a meeting that brought together representatives of public authorities, specialized lawyers, and technology sector professionals to analyze the regulatory challenges associated with the development and use of AI systems.
Among the speakers, Alejandro Touriño, managing partner of ECIJA, addressed the complex relationship between the General Data Protection Regulation (GDPR) and the Artificial Intelligence Regulation (AIR), two high-level regulations that, while sharing objectives of protection, respond to different logics.
GDPR and AIR: regulatory duality with organizational impact
In his remarks, Alejandro Touriño highlighted that the GDPR and the AIR “protect different legal assets” and entered into force at different times, which has led many organizations to respond in a fragmented and, in some cases, uncoordinated way. This regulatory duality, he explained, undermines legal certainty and poses significant internal governance challenges for companies that develop or use artificial intelligence systems.
On this point, the managing partner of ECIJA emphasized the need to design integrated compliance models capable of simultaneously addressing privacy requirements, data protection, and technological regulation, avoiding partial or exclusively technical approaches.
AI Governance: differentiated profiles and a legal-technical approach
One of the most relevant aspects of the debate was the role of new governance figures within organizations. Alejandro Touriño advocated a clear differentiation between the data protection officer (DPO) and the chief AI officer (CAIO), pointing out that they are roles with different functions and responsibilities.
While the DPO safeguards a fundamental right, the protection of personal data, the CAIO, he explained, is responsible for a technology. He therefore warned that assigning this role exclusively to technical profiles may be insufficient and may generate significant legal risks, since key regulatory issues would be left unaddressed.
This vision reinforces the multidisciplinary approach that ECIJA has advocated in the field of artificial intelligence, combining advanced legal knowledge, technological understanding, and experience in data governance.
Towards greater regulatory pressure
The meeting also highlighted that, although enforcement activity in the field of AI has so far been limited, experts anticipate a gradual tightening of controls by the authorities, especially as these technologies become more widespread and accessible to a larger number of users.
Recent cases have shown how AI systems can amplify existing risks relating to privacy, the misattribution of facts, the use of sensitive data, and the generation of illegal content, reinforcing the need to anticipate these issues from the design and governance stage.
The full article is published in Cinco Días.