Beware of texts generated by artificial intelligence!

Articles · 5 March 2026
The use of artificial intelligence in legal practice opens up new opportunities, but it also poses risks when it is applied without oversight to the drafting of judgments and legal writings.

It is not surprising that the ubiquitous artificial intelligence (AI) has also made a strong entry into the legal world, revolutionising the way the profession operates. It is an advance comparable to the digitisation of case-law databases (my peers from my generation, who had to search through shelves of compilations organised by year, will know what I mean), the sending of legal documents and court judgments in real time by email rather than by fax or even postal mail, as was once used for correspondence between lawyers, or the digitisation of case files, which lets us work from anywhere in the world with a desk, a chair and Wi-Fi.


Even before AI, trials were already making use of other technologies, such as virtual reality. In January 2025, a court in Broward County, Florida, admitted as evidence a virtual-reality simulation of the events, which was played at the hearing through Oculus Quest headsets so that the judge could view a recreation of the scene under trial. And in criminal proceedings in Arizona, a court played an AI-generated digital recreation in which the victim, Chris Pelkey (a veteran who died after a traffic dispute), addressed the trial to express his forgiveness to the accused, something his family believed he would have done.


In addition to becoming a working tool for lawyers, AI itself can be the subject of litigation, as in the wrongful death lawsuit filed against Character.AI by the mother of Sewell Setzer, a 14-year-old who committed suicide after engaging in emotional and sexual conversations with a chatbot imitating a character from the series 'Game of Thrones'. In May 2025, a Florida district judge allowed the lawsuit to proceed, dismissing the defence argument that the First Amendment of the US Constitution (freedom of speech) should apply and rejecting the idea that the automated output of a virtual assistant could enjoy constitutional rights.


However, where AI is having a truly significant impact in the legal field is in the drafting of legal documents, and particularly those intended to take effect before a court (judicial resolutions and defence briefs). Improper use of AI can lead to the inclusion of content that falls short of professional standards, or even of outright fabrications, with the dangerous consequences this may entail.


Regarding court judgments, in October 2025 the Criminal Chamber of Esquel annulled a sentence issued by a court in Chubut (Argentina) because the text contained the phrase "Here you have point IV reedited, without citations and ready to copy and paste", which showed that AI had been used to draft the judgment. The annulment was based on the lack of transparency about which AI system had been used, what data had been entered and what instructions had been given, and on the principle that the power to judge is conferred on the judge personally and cannot be delegated to an algorithm when decisions affect people's freedom.


As for the writings submitted to the courts by lawyers, Spanish courts have already begun to rule on the matter. In its ruling of 19 September 2024, the Constitutional Court unanimously decided to sanction a lawyer for failing to show due respect to the Court by including in a protection appeal 19 citations of jurisprudence "that were cited as if they were real when, in fact, they did not exist".


Similarly, in ruling 126/2025 of 22 December 2025, the Superior Court of Justice of the Canary Islands sanctioned a lawyer whose appeal included citations attributed to the Supreme Court that did not exist and invoked a fictitious 2019 report by the General Council of the Judiciary on the credibility of child testimony. The court described this as "unrestrained legal creativity" and "flagrant negligence": not a simple mistake, but a repeated practice warranting disciplinary action. More recently, an Order of 10 February 2026 fined a lawyer €420 for citing up to 48 false judgments suggested by AI, with the amount of the fine moderated by the lawyer's acknowledgement of the facts and his remorse.


In contrast, in Order 2/2024 of 4 September 2024, the Superior Court of Justice of Navarra decided not to sanction a lawyer who, having used ChatGPT to draft a complaint, cited an article of the Colombian Penal Code as if it belonged to the Spanish Penal Code. Here, too, the absence of a sanction rested on his immediate reaction: he apologised and admitted the error, ruling out any intention to deceive, though the court warned that the improper use of AI may constitute bad faith in litigation.


Therefore, there is no doubt that we are in a new ecosystem where we coexist with technologies that can help us be more effective and productive professionals, or that can lead us to make mistakes with unpredictable consequences if we give in to the temptation to allow content to be generated without supervision. As with all human advancements since the mastery of fire or the invention of metallurgy, AI is not dangerous in itself, but it can be if used without the necessary control.

