Legal challenges of Artificial Intelligence
The European Union's Artificial Intelligence Act classifies AI systems according to their level of risk (minimal, limited, high or unacceptable) and imposes proportionate obligations in terms of documentation, transparency, oversight and risk mitigation.
This regulation sits alongside existing frameworks such as the General Data Protection Regulation (GDPR), which applies whenever AI processes personal data, as well as sectoral rules affecting areas such as healthcare, financial services and human resources management.
Complying with these regulatory frameworks requires companies to review their internal processes, particularly with regard to the lawfulness of data processing, the traceability of algorithmic models and liability for erroneous or discriminatory automated decisions.
The business use of artificial intelligence brings with it multiple legal challenges that require strategic and preventive responses from the legal sector.
Among the most relevant are:
- Data protection and privacy: AI requires large volumes of data to train predictive or generative models. When this data includes personal information, the GDPR applies in full, with requirements on the lawfulness and transparency of processing, data minimisation and security measures. It is not only a matter of complying with legal obligations but of designing systems that integrate the principles of privacy by design and by default from the outset (a minimal technical sketch of this idea follows this list).
- Intellectual property: training AI systems on copyrighted content raises the question of who owns the generated output. Since AI is not considered a subject of law, it falls to developers or users to establish contractually the limits on use, transfer and exploitation of that output, as well as guarantees against possible infringement.
- Transparency: one of the biggest legal and ethical challenges lies in the opacity of algorithms. The new European regulation requires high-risk AI systems to be accompanied by technical documentation that justifies their decisions and enables external audits.
- Civil and criminal liability: the use of AI does not exempt anyone from liability; both developers and business users can be held liable for damage caused by erroneous automated decisions. Because liability is one of the main challenges, companies should establish monitoring mechanisms and action protocols for errors, biases or unintended consequences.
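By way of illustration, the following is a minimal sketch of what privacy by design can look like at the data-preparation stage: dropping fields a model does not need (data minimisation) and replacing direct identifiers with a salted hash (pseudonymisation). The field names and the `REQUIRED_FIELDS` set are hypothetical examples, not a legal standard; note that pseudonymised data generally remains personal data under the GDPR.

```python
# Minimal sketch of privacy-by-design preprocessing (illustrative only):
# (1) keep only the fields the model actually needs (data minimisation);
# (2) replace the direct identifier with a salted hash (pseudonymisation).
import hashlib
import os

REQUIRED_FIELDS = {"age_band", "region", "purchase_count"}  # hypothetical
SALT = os.urandom(16)  # per-dataset secret; must itself be stored securely

def minimise_and_pseudonymise(record: dict) -> dict:
    """Drop unneeded fields and pseudonymise the customer identifier."""
    pseudo_id = hashlib.sha256(SALT + record["customer_id"].encode()).hexdigest()
    slim = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    slim["pseudo_id"] = pseudo_id
    # Caution: because the salt is retained, this data can still be linked
    # back to individuals and remains personal data under the GDPR.
    return slim

raw = {"customer_id": "C-1042", "full_name": "Jane Doe",
       "age_band": "35-44", "region": "ES-MD", "purchase_count": 7}
print(minimise_and_pseudonymise(raw))  # name is gone; ID is pseudonymised
```

The design choice here is the legally relevant one: fields such as the customer's name never enter the training pipeline at all, rather than being filtered out later.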
Moreover, beyond regulatory compliance, the responsible use of AI involves integrating ethical principles into its design and implementation. Organisations must adopt policies that go beyond what is legally required, especially with regard to:
- Non-discrimination: biases in training data can carry over into the operation of systems, leading to discrimination on the basis of gender, age, race or other factors. Algorithmic audits and continuous review of models are essential to detect and mitigate these deviations (one such audit check is sketched after this list).
- Meaningful human oversight: the EU Regulation requires that critical decisions are not taken by automated processes alone, without human intervention or review.
- Trust and social transparency: the legitimacy of the use of AI derives not only from its legality but also from its acceptance by society.
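To make the idea of an algorithmic audit concrete, here is a minimal sketch of one check such an audit might run: comparing the rates of favourable automated decisions between two groups. The decision log, the group labels and the 0.8 threshold (the informal "four-fifths" rule of thumb, which originates in US employment practice rather than in EU law) are illustrative assumptions, not a test mandated by the AI Act or the GDPR.

```python
# Minimal sketch of one fairness check an algorithmic audit might run:
# the "disparate impact" ratio between favourable-outcome rates of two groups.
def positive_rate(decisions: list[dict], group: str) -> float:
    """Share of favourable automated outcomes for one group."""
    outcomes = [d["approved"] for d in decisions if d["group"] == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact(decisions: list[dict], group_a: str, group_b: str) -> float:
    """Ratio of positive rates; values well below 1.0 suggest possible bias."""
    return positive_rate(decisions, group_a) / positive_rate(decisions, group_b)

# Toy log of automated decisions: protected-attribute group + model outcome.
decisions = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

ratio = disparate_impact(decisions, "B", "A")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # informal four-fifths rule of thumb, not an EU legal test
    print("flag for human review and model re-examination")
```

In practice, checks of this kind would run continuously over real decision logs and feed into the human-oversight and documentation obligations described above.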
Against this backdrop, regulatory and ethical compliance should not be a reactive response but a preventive and cross-cutting strategy.
Thus, key measures that companies can adopt include designing internal compliance policies, conducting legal and ethical impact assessments, providing ongoing training for legal and technical teams, carrying out regular audits, and selecting responsible technology providers that meet the required legal standards and offer transparency about how their solutions operate.
One of the great challenges of 2025 is the need for interdisciplinary training: lawyers need to understand not only the regulatory frameworks but also the technical fundamentals of the systems they advise on.
Artificial intelligence is already part of the present of our organisations, but its future will depend on how well we regulate, use and supervise it.
The legal challenge in 2025 is not only technical; it is ethical, educational and regulatory. As legal professionals, we have a responsibility to ensure that technological progress does not erode fundamental rights but strengthens them through a legal framework that is solid, fair and adapted to the new times.
Article written by Ana Luengo, from the area of Economic Criminal Law.