Regulating artificial intelligence: a first step in industrial property, many challenges ahead

Articles | 1 October 2025
Artificial intelligence moves from experimental to legal: a landmark reform will sanction abuses and unfair competition generated by algorithms.

The Mexican Presidency sent to the Senate an initiative that seeks to reform 217 articles of the Federal Law for the Protection of Industrial Property, add 23 and repeal 6, in order to regulate, for the first time, the uses of artificial intelligence in this area. The proposal recognises that AI can be used to generate misleading content, replicate trademarks, copy industrial designs or take advantage of trade secrets, and seeks to close legal loopholes that currently allow these abuses to go unpunished. The bill proposes administrative sanctions and grants new powers to the Mexican Institute of Industrial Property (IMPI) to secure products related to these practices.


Among the proposed measures is the strengthening of IMPI's powers to act preventively. The agency would be able to secure products linked to violations committed through the use of AI, an important step towards giving right holders a rapid response. This would equip IMPI with agile tools to protect trademarks, patents and designs in an ecosystem where the pace of technological change is beginning to outstrip the law's ability to react.


The focus on administrative infringements, rather than criminal offences, carries a relevant nuance. This design seeks more streamlined procedures and less costly sanctions, which may prove more effective than criminal proceedings, typically longer and more complex. However, the initiative also raises a difficult technical question: how does one prove that an infringement was committed with AI rather than by conventional means? An industrial design copied with AI may be indistinguishable from one copied manually. A mark counterfeited through algorithmic generation may appear identical to one created by a human designer.


IMPI will face unprecedented evidentiary challenges. Determining whether a logo was generated with Midjourney, whether an industrial design stems from the unauthorised training of models, or whether an invention was 'inspired' by protected datasets will require sophisticated technological expertise that the institute does not have today. The reform grants powers of injunctive relief, but what is there to secure when the infringement lies in the code, in the training data or in the prompts used? Inspectors will need access to system logs, file metadata, prompt histories and model architectures. IMPI will need to develop AI-specific digital forensic capabilities, something no Mexican agency has done yet.
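To make the point concrete, the sketch below shows, in Python, one very basic starting point for that kind of forensic triage, assuming the disputed evidence is an ordinary image file and that the Pillow library is available: it scans EXIF fields and PNG text chunks for traces sometimes left by generation tools. The marker list and file name are purely illustrative, and a clean result proves nothing, since metadata can be stripped or forged at will.

```python
# Minimal metadata triage sketch (illustrative only, not an IMPI procedure).
# Assumes Pillow is installed: pip install Pillow
from PIL import Image, ExifTags

# Hypothetical marker strings sometimes left by generation tools; serious
# forensic work would rely on provenance standards and logs, not on this.
GENERATOR_MARKERS = ("midjourney", "stable diffusion", "dall-e", "firefly")

def scan_image_metadata(path: str) -> list[str]:
    """Return metadata entries that mention known AI-generation tools."""
    findings = []
    with Image.open(path) as img:
        # PNG text chunks and other format-specific data land in img.info.
        entries = {str(k): str(v) for k, v in img.info.items()}
        # EXIF tags (e.g. 'Software') for JPEG/TIFF files.
        for tag_id, value in img.getexif().items():
            tag_name = ExifTags.TAGS.get(tag_id, str(tag_id))
            entries[tag_name] = str(value)
    for key, value in entries.items():
        if any(marker in value.lower() for marker in GENERATOR_MARKERS):
            findings.append(f"{key}: {value[:120]}")
    return findings

if __name__ == "__main__":
    for hit in scan_image_metadata("disputed_logo.png"):  # hypothetical file
        print("possible generator trace ->", hit)
```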


More complex still: how can one distinguish coincidental similarity from intentional infringement when AI models can generate similar outputs without any direct access to the protected works? The burden of proof becomes a legal and technical labyrinth.
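A toy comparison illustrates why. The sketch below measures the similarity of two image files with a simple difference hash, a standard perceptual-hashing technique chosen here only for illustration (the file names are hypothetical): two near-identical marks yield near-identical fingerprints whether they were drawn by hand or generated by a model, so a similarity score by itself says nothing about access, method or intent.

```python
# Toy perceptual-hash comparison (illustrative; not a legal standard of proof).
from PIL import Image

def dhash(path: str, size: int = 8) -> int:
    """Difference hash: encodes relative brightness of neighbouring pixels."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes (0 = identical)."""
    return bin(a ^ b).count("1")

# Two visually similar marks give a small distance whether drawn by hand or
# generated by a model; the metric cannot reveal *how* they were made.
distance = hamming(dhash("registered_mark.png"), dhash("disputed_mark.png"))
print("hash distance:", distance)
```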


This evidentiary complexity is not unique to Mexico; it is appearing everywhere. In Europe, the AI Act has already put in place a scaffolding that demands more as the risk rises (for example, in biometrics, critical infrastructure, education, employment or access to essential services) and imposes reinforced controls on those who develop and deploy these systems.[1] In the United States, absent a comprehensive federal law, the path has been one of guidelines and cases: the USPTO published inventorship criteria for AI-assisted inventions (patenting is possible where there is a significant human contribution) and has cautioned practitioners about disclosing the role of AI when filing applications; courts and offices have also reiterated that a machine cannot be listed as an inventor.[2] At the same time, the consumer protection authority has begun to act against misleading uses: in August 2025, the FTC sued Air AI over inflated promises of growth and revenue linked to its technology.[3]


Two practical lessons for our context emerge from these fronts: first, evidentiary standards are tending to call for traceability and identifiable human responsibility (who did what, and with what data); second, internal governance is beginning to come under scrutiny. In practice, this points to frameworks such as NIST's AI Risk Management Framework, voluntary but increasingly cited, and management systems of the ISO/IEC 42001 type, which require policies, controls and audits across the AI lifecycle.[4]


With this map, IMPI does not start from scratch: it can rely on the European classification to calibrate risk and on the US experience in pursuing deceptive practices, while the private sector adopts internal management and testing methodologies that make it easier, when necessary, to accredit the origin and legality of its models.
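What such internal methodologies could look like in practice is suggested, very schematically, by the sketch below: a hypothetical model record, written in Python, that documents data sources, licences, human reviewers and audit dates. The field names are ours, not an official NIST or ISO/IEC 42001 schema; the point is simply that lifecycle documentation of this kind is what would let a company accredit the origin and legality of a model if asked.

```python
# Illustrative internal model record; field names are hypothetical and do not
# reproduce any official NIST AI RMF or ISO/IEC 42001 schema.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class ModelRecord:
    model_name: str
    version: str
    intended_use: str
    training_data_sources: list[str]   # where the training data came from
    data_licences: list[str]           # licences or contracts covering it
    human_reviewers: list[str]         # identifiable human responsibility
    risk_level: str                    # internal rating, e.g. low/medium/high
    last_audit: date
    notes: str = ""

    def to_json(self) -> str:
        record = asdict(self)
        record["last_audit"] = self.last_audit.isoformat()
        return json.dumps(record, indent=2, ensure_ascii=False)

# Example entry a company might produce to document a model's origin.
record = ModelRecord(
    model_name="logo-generator",
    version="2.3.1",
    intended_use="internal draft logos, always subject to human review",
    training_data_sources=["licensed stock library", "company-owned designs"],
    data_licences=["stock licence #A-1023 (hypothetical)"],
    human_reviewers=["design.lead@example.com"],
    risk_level="medium",
    last_audit=date(2025, 9, 15),
)
print(record.to_json())
```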


On the other hand, in Mexico the Supreme Court of Justice of the Nation (SCJN), in direct appeal 6/2025, ruled that, under current Mexican legislation, works generated by artificial intelligence cannot be registered as works protected by copyright, since authorship is reserved exclusively to natural persons. The case concerned the refusal of the National Copyright Institute (INDAUTOR) to register an AI-generated graphic work, a decision upheld by the Federal Court of Administrative Justice and, finally, by the SCJN. The Court held that both the Federal Copyright Law and the applicable international treaties (including the Berne Convention) recognise only human beings as subjects of copyright, expressly excluding artificial entities. The SCJN likewise emphasised that creativity, originality and the moral and economic rights over works are prerogatives inherent to human nature, so intellectual property rights cannot be recognised in favour of AI systems.


The Mexican government's initiative is a necessary step and the timing is right, but it marks only the beginning of a conversation that must be both more complex and much broader. Artificial intelligence has become a cross-cutting actor: it touches not only trademarks and designs, but also personal data, copyright, civil liability and even due process. Confining the discussion to a single chapter of the law would fall short of a technology that permeates practically every branch of law, which is why broadening the debate is a practical necessity if Mexico is to have a coherent and up-to-date framework.


We must now ask whether the government and its institutions are ready for this challenge, because regulating artificial intelligence is not just a matter of passing laws. It means equipping the authorities with the technical capacity to audit algorithms, analyse datasets and weigh expert technological evidence, tasks that until recently were unthinkable in an administrative procedure. The challenge is not a minor one. In a world where the evidence may sit in a system log or in the metadata of a file, institutions will have to reach a level of forensic sophistication that does not seem within their reach today.


For all these reasons, it is essential to bring the issue to the legislative arena in a broad and urgent manner. Mexico needs to discuss seriously what it wants from artificial intelligence, how it will supervise it and what resources it will allocate to that task. Without this technical capacity, any reform risks being more of an obstacle than a solution. A law without enforcement capacity does not provide certainty; on the contrary, it generates legal uncertainty and slows down the very innovation it seeks to protect. Public discussion must recognise this point: regulate, yes, but on solid foundations and with sufficient tools so that the law does not become a dead letter.


For companies, this new scenario sends a clear signal: using AI without internal controls can lead to serious legal consequences. The need is not only to avoid legal disputes, but to build internal technology governance policies that align innovation with ethics within a legal framework. Business leaders will have to anticipate risks, establish protocols for transparency in the use of algorithms and work hand in hand with their legal advisers to shield their operations.
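As a very simple illustration of what such a transparency protocol might record, the sketch below (all names are hypothetical) logs who used which AI tool, with which prompt and with what output fingerprint, to an append-only file; this is precisely the kind of traceability and identifiable human responsibility that the emerging evidentiary standards point towards.

```python
# Minimal usage-trail sketch: an assumption about what an internal protocol
# might log, not a legal requirement or an official format.
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "ai_usage_log.jsonl"  # hypothetical append-only log

def log_ai_use(user: str, tool: str, prompt: str, output: bytes) -> dict:
    """Append one traceability entry: who, which tool, what prompt, what output."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        "output_sha256": hashlib.sha256(output).hexdigest(),
    }
    with open(LOG_PATH, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return entry

# Example: record that a named employee generated a draft asset with a tool.
log_ai_use(
    user="maria.lopez",
    tool="image-generator-x",  # hypothetical tool name
    prompt="draft logo, blue hexagon, no third-party marks",
    output=b"...binary image bytes...",
)
```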


In this new scenario, the countries and companies that balance technological creativity with respect for the law will lead the way. Regulation, backed by the appropriate technical resources, will provide the confidence needed for artificial intelligence to become a legitimate driver of competitiveness and global development.



[1] European Parliament, "EU AI Act: first regulation on artificial intelligence" (https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence).

[2] United States Patent and Trademark Office, "Inventorship Guidance for AI-Assisted Inventions", Federal Register, 13 February 2024 (https://www.federalregister.gov/documents/2024/02/13/2024-02623/inventorship-guidance-for-ai-assisted-inventions).

[3] Federal Trade Commission, "FTC Sues to Stop Air AI from Using Deceptive Claims about Business Growth, Earnings Potential, and Refund Guarantees to Bilk Millions from Small Businesses" (https://www.ftc.gov/news-events/news/press-releases/2025/08/ftc-sues-stop-air-ai-using-deceptive-claims-about-business-growth-earnings-potential-refund).
