Who is legally responsible when AI generates false information?

28 April 2026
Liability, professional duty, and legal limits regarding AI hallucinations within the European regulatory framework.

In the EU, AI errors do not constitute a separate category of liability under the regulations. For general-purpose systems, avoiding hallucinations is more a commercial guarantee than a legal requirement.

"We are very sorry for what happened." This was the message from Andrew Dietderich, head of the restructuring department at Sullivan & Cromwell, in a letter sent last week to federal judge Martin Glenn in New York regarding the errors made by the artificial intelligence platform that the firm used to draft documents for a client's case, which included incorrect citations of US legislation.


In addition to the evident damage to the reputation of one of New York's elite law firms, this case has raised questions about what might have happened in Europe, where AI regulation is much more advanced.


The Sullivan & Cromwell case illustrates a legal problem that is still in its infancy, where the key lies not so much in the 'hallucinations' (as the errors AI makes in its responses are known) as in the chain of responsibility that is triggered when those errors go beyond mere drafts.


Within the European framework, there is no separate category of 'liability for hallucinations.' As Juan Carlos Guerrero, a partner specializing in intellectual property (IP) and technology, media, and telecommunications (TMT) at ECIJA, explains, "this is constructed by fitting the damage to classic liability regimes." In other words, the analysis shifts to contractual liability, tort liability or, in some cases, liability for defective products.


Furthermore, the ECIJA expert continues, it is necessary to determine “who controlled the risk at each stage of the chain (provider, integrator, professional user), which specific duty was breached (duty to disclose, duty of due diligence in design, duty of human oversight, etc.) and whether a sufficiently strong causal link can be demonstrated between the false outcome and the harm.”


When the party harmed by the alleged errors made by the AI is the user itself, the focus shifts to the contract. “The starting point is not that the AI 'lied', but whether the provider failed to deliver what was promised,” Guerrero points out, highlighting aspects such as reliability, warnings, or the need for human oversight. In this context, technical evidence is decisive, as without traceability or usage records, proving fault becomes particularly complex.


The scenario changes when the harm affects third parties, such as the clients of a law firm. In these cases, liability typically rests with the professional who used the AI, since "the lawyer did not merely consult a tool, but turned a probabilistic result into a procedural fact, presenting something that was not real as actual case law." As Joaquín Muñoz, the partner responsible for privacy and data protection at Bird & Bird, explains, reviewing AI-generated results "is not just a recommendation of best practice, but an ethical and legal requirement."


At the same time, although this human intervention (or the lack of oversight) weakens the direct link with the technology provider, account must also be taken of the recent European directive on liability for defective products, which broadens the concept of product to include software while setting significant limits. As the ECIJA partner points out, "information is not considered a product," which makes it difficult to fit certain hallucinations within this framework.


In parallel, the European regulatory framework, particularly the AI Act, imposes certain obligations on providers to prevent or mitigate hallucinations, although, as the Bird & Bird partner explains, "the strictest measures primarily affect high-risk systems." These systems must ensure adequate levels of accuracy, robustness, and cybersecurity, and must implement control, auditing, and risk management mechanisms throughout their lifecycle, including the identification and mitigation of foreseeable errors. "They are even required to report accuracy levels in the product's instructions for use," he adds.


For general-purpose models, however, the obligations focus mainly on transparency and technical documentation; as Muñoz explains, the anticipation or mitigation of hallucinations "is more of a commercial guarantee than a legal requirement for general-purpose systems." Nonetheless, "the provider of the AI system has every interest in ensuring that the system functions correctly," he adds.


Despite these prevention mechanisms, “it is important to understand the risks associated with any integration of an AI system and manage them accordingly,” Muñoz concludes.


Consequences for a lawyer who 'believes' everything an AI says

“A clear breach of basic professional duties.” This is how Juan Carlos Guerrero, a partner at ECIJA, describes how a case similar to Sullivan & Cromwell's would be viewed in Spain.

“We are not dealing with a simple technical error, but with a breach of the essential rules governing the practice of law, which opens three fronts: disciplinary liability before the bar association, possible sanctions from the court hearing the proceedings and, in more extreme scenarios, even criminal liability if there are additional elements (for example, fraudulent conduct or material deception),” explains Guerrero, who notes that lawyers have a duty to act with truthfulness, diligence, and loyalty before the courts under the General Statute of the Spanish Bar and the Spanish Code of Conduct for Lawyers. “Presenting non-existent judgments as if they were real directly violates this standard,” he emphasizes.


Furthermore, depending on the case, the harm caused, and whether it constitutes provable damage, “the client could file a civil liability claim for professional negligence,” warns Joaquín Muñoz, a partner at Bird & Bird.


Sanctions can range from a reprimand or a fine to suspension from the practice of law (for several months or even years in severe cases) and, in extreme cases, expulsion from the bar association, the ECIJA partner explains, although the outcome will depend on many factors, such as intent, the relevance of the false case law to the proceedings and, above all, the lawyer's response. In this regard, as Muñoz points out, the Superior Court of Justice of the Canary Islands has already imposed a fine of €420 on a lawyer for citing up to 48 non-existent judgments suggested by an AI.


Reputational damage, the Bird & Bird partner explains, is another consequence, since such episodes expose a lack of rigor both in the use of AI tools and in the oversight of the legal work.

