Artificial Intelligence in Compliance Programmes: A Pathway to Exemption from Criminal Liability?
The integration of artificial intelligence (AI) in the business world is no longer a future prospect but a tangible reality that transforms processes and redefines strategies. One of the areas where its impact is most promising is compliance programmes. However, this technological evolution raises a fundamental legal question: can an AI-based compliance model be sufficient to meet the effectiveness requirements of the Criminal Code and thus exempt the legal entity from criminal liability?
To address this question, it is essential to start from the legal framework established in Article 31 bis of the Criminal Code, consolidated after the reform carried out by Organic Law 1/2015. This provision establishes that a legal person may be criminally liable for offences committed in its name or on its behalf, and for its benefit, in two ways:
- Acts of its directors: Offences committed by its legal representatives or by those who have powers of organisation and control.
- Acts of its employees: Offences committed by subordinates, when there has been a serious breach of the duties of supervision, monitoring and control by the managers.
The key to exemption from this liability lies in the adoption and effective implementation, prior to the commission of the offence, of organisational and management models that include surveillance and control measures suitable for preventing offences or significantly reducing the risk of their commission.
Artificial intelligence offers tools that can, in theory, boost the effectiveness of a compliance programme exponentially. Advanced systems can:
- Analyse large volumes of data (Big Data) to detect anomalous patterns or suspicious transactions that would go unnoticed by a human supervisor (a simplified sketch of this kind of screening appears below).
- Monitor communications and operations in real time, proactively identifying risks.
- Automate controls, reducing human error and ensuring the consistent application of company policies.
This technological drive is consistent with the use of AI encouraged by various regulations, such as Law 6/2024 of 5 December on administrative simplification, which places AI at the heart of administrative processing.
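By way of illustration only, the following minimal sketch in Python, with entirely hypothetical data fields and thresholds, shows the kind of statistical screening such a system might perform: flagging payments whose amount deviates sharply from the historical pattern so that a compliance officer can review them. A real programme would calibrate far richer models and rules to the company's own risk assessment.

```python
# Minimal, illustrative sketch of anomaly screening for compliance review.
# The data model (Payment fields) and the z-score threshold are hypothetical.

from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class Payment:
    payment_id: str
    counterparty: str
    amount_eur: float


def flag_anomalies(payments: list[Payment], z_threshold: float = 3.0) -> list[Payment]:
    """Return payments whose amount deviates strongly from the historical mean.

    A simple z-score rule is used purely for illustration; production systems
    would typically combine statistical models with rule-based controls.
    """
    amounts = [p.amount_eur for p in payments]
    if len(amounts) < 2:
        return []
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [p for p in payments if abs(p.amount_eur - mu) / sigma > z_threshold]


if __name__ == "__main__":
    history = [Payment(f"P{i:03d}", "Vendor A", 1_000.0 + i) for i in range(50)]
    history.append(Payment("P999", "Vendor B", 250_000.0))  # clear outlier
    for p in flag_anomalies(history):
        print(f"Flag for human review: {p.payment_id} -> {p.counterparty}, {p.amount_eur:,.2f} EUR")
```

Even in such a simple example, the output is only an alert: the decision about what to do with it remains with a natural person, which is precisely where the legal analysis of Article 31 bis begins.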
Despite its potential, the delegation of control functions to an AI system does not automatically guarantee exemption from liability. The judicial assessment of the "effectiveness" of a compliance model goes beyond mere technical sophistication. Circular 1/2016 of 22 January of the State Attorney General's Office, on the criminal liability of legal persons, is clear in stating that compliance programmes should not be a "safe conduct for impunity" but the expression of a "true ethical business culture".
In this context, several key challenges arise:
- The "black box" problem: if an algorithm takes control decisions whose internal processes are opaque or inexplicable, how can the company prove in court that it has exercised due diligence? Regulations such as the Comprehensive Equal Treatment and Non-Discrimination Act insist on the transparency and interpretability of algorithms.
- The need for human supervision: the law requires the breach of control duties by natural persons to be "serious", and Circular 1/2016 stresses that supervision is a non-delegable function of the management body (see the sketch below for how such oversight can be documented).
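To make the point about traceability concrete, the following sketch, again purely illustrative and built on hypothetical rules and field names, shows how an automated control can record why a transaction was flagged and who reviewed it, producing the kind of audit trail a company might rely on to evidence its diligence:

```python
# Illustrative sketch of an auditable, human-supervised control.
# The screening rules, field names and thresholds are hypothetical; the point
# is that every automated flag records *why* it fired and who reviewed it.

import json
from datetime import datetime, timezone


def screen_transaction(tx: dict) -> list[str]:
    """Return human-readable reasons for escalation (empty list = no flag)."""
    reasons = []
    if tx["amount_eur"] > 10_000:
        reasons.append("amount exceeds 10,000 EUR single-payment threshold")
    if tx["counterparty_country"] in {"XX"}:  # placeholder high-risk list
        reasons.append("counterparty located in high-risk jurisdiction")
    return reasons


def record_review(tx: dict, reasons: list[str], reviewer: str,
                  approved: bool, audit_log: list[dict]) -> None:
    """Append an audit entry pairing the machine flag with the human decision."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "transaction_id": tx["id"],
        "machine_reasons": reasons,
        "reviewer": reviewer,
        "approved": approved,
    })


if __name__ == "__main__":
    audit_log: list[dict] = []
    tx = {"id": "TX-0001", "amount_eur": 25_000, "counterparty_country": "XX"}
    reasons = screen_transaction(tx)
    if reasons:
        # The algorithm only escalates; the decision rests with a natural person.
        record_review(tx, reasons, reviewer="compliance.officer@example.com",
                      approved=False, audit_log=audit_log)
    print(json.dumps(audit_log, indent=2))
```

The design choice worth noting is that the algorithm only escalates and explains; acceptance or rejection is recorded as the decision of an identified natural person, consistent with the non-delegable character of supervision highlighted by Circular 1/2016.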
In short, the implementation of a compliance system based on artificial intelligence does not, by itself, constitute an automatic exemption from criminal liability of the legal person. Although AI represents a tool with undeniable potential to optimise the prevention and detection of criminal risks, its mere presence does not satisfy the requirements of Article 31 bis of the Criminal Code.
The judicial assessment of the effectiveness of an organisational and management model transcends mere technological implementation. As Circular 1/2016, of 22 January, on the criminal liability of legal persons under the reform of the Criminal Code carried out by Organic Law 1/2015, underlines, the analysis focuses on the existence of a true corporate ethical culture and on the model's real effectiveness in preventing crimes. The delegation of control functions to an algorithmic system does not exempt the management body from its duties of supervision, monitoring and control; on the contrary, it imposes a new duty of diligence: understanding, auditing and actively supervising the AI tool itself in order to mitigate risks such as opacity and bias.
Artificial intelligence should therefore be regarded as an advanced and sophisticated component within the organisational and management model, but not as a substitute for human judgement and accountability. The final decision on exemption will always rest on a case-by-case judicial assessment, in which the technology is a means whose effectiveness must be proven, not an exculpatory end in itself.
Article written by Lydia García, lawyer at ECIJA Madrid.