Digital surveillance, ideological control and the risk of criminalising thought: a legal look at 21st-century authoritarian models
In 2019, The New York Times published more than 400 pages of internal Chinese government documents detailing the operation of "re-education" centres in Xinjiang[1]. What those documents revealed was a system that relied not so much on guards and bars as on algorithms: facial recognition on every corner, predictive behavioural analysis, databases that cross-referenced shopping history with mosque visits, sensors in phones that detected Koran reading. Technology was not policing crimes; it was policing ideas. And it did so with a precision that no authoritarian regime in the 20th century had ever achieved[2].
This is not dystopian science fiction, nor is it a problem for international bodies alone. It is the present of digital surveillance, and its legal implications reach us directly. Because the same companies that developed facial recognition for Xinjiang sell "twin" technology in Latin America. Because the intelligent video surveillance systems operating today in Mexico City, Monterrey or Guadalajara use similar architectures. And because so far, no Mexican law clearly establishes what a surveillance algorithm can and cannot do with the biometric data of millions of people.
International law has something to say about this, even if it seems far away. Articles 9, 12, 18 and 19 of the International Covenant on Civil and Political Rights are not dead letters: they prohibit arbitrary detention, persecution on the basis of belief and any form of surveillance that infringes on freedom of thought[3]. The Convention against Torture imposes an absolute and unconditional prohibition that no state can derogate from, even in situations of emergency or alleged national security. In this context, when an algorithm decides who is 'suspect' on the basis of religiosity, political affiliation or ideological profiling, it is not simply surveillance: it constitutes a method that, if it overrides autonomy or seeks to systematically intimidate, could fall within the spectrum of degrading treatment or institutional abuse[4]. This is a sophisticated form of preventive repression.
The European Union understood this before anyone else. The AI Act, which came into force this year, classifies real-time facial recognition in public spaces as an "unacceptable risk", permitted only in exceptional cases and under strict judicial supervision[5]. The aim is not to ban the technology, but to subject it to controls equivalent to those required to search a home or tap a private communication. Germany went further: in 2023 its Federal Constitutional Court struck down provisions in Hesse and Hamburg that authorised automated predictive data analysis by the police, for violating the right to free development of personality[6]. The argument was simple and forceful: no one can live freely if they know that an algorithm is constantly evaluating their behaviour to predict whether they will commit a crime.
In the United States the path has been different but converges on the same point. There is no specific federal AI law, but the FTC has begun using consumer protection rules to sanction misleading or discriminatory uses of algorithms. Interestingly, several states - Massachusetts[7], California[8], Illinois[9] - have banned or severely restricted facial recognition, whether in the hands of local authorities or of private entities. The reason was not technological but constitutional: the Fourth Amendment protects against unreasonable searches, and a system that permanently scans the faces of everyone in a public space amounts to a massive, continuous search without probable cause.
Mexico, by contrast, operates in a dangerous regulatory limbo. Cities have installed thousands of cameras with facial recognition capabilities - Mexico City's C5 is one of the largest systems in Latin America - but there is no single law that regulates their use, requires transparency about the algorithms that process the data, establishes how long images are stored or guarantees individuals the right to know whether they are being tracked. The Ley Federal de Protección de Datos Personales en Posesión de los Particulares applies only to private parties, and its public-sector counterpart does not specifically address surveillance technologies. The General Law on Transparency requires the publication of information, but does not regulate the use of surveillance technologies. And the Constitution protects privacy, yes, but the Supreme Court has not yet had occasion to rule on whether mass automated facial recognition violates that right.
This vacuum, in addition to posing a civil liberties problem, is an operational and legal risk for any company that develops, sells or implements these technologies in Mexico. Imagine the case of a software company that provides intelligent video analysis to state governments. Today there is no clarity on what data can be processed, under what conditions, or with what technical or legal safeguards. Nor is it clear who bears responsibility when a system generates false positives or is used for purposes other than those authorised. If in the future the Court were to declare the use of certain facial recognition technologies unconstitutional, the question would inevitably arise: what would happen to existing contracts? Who would be liable for the damage caused? The developer? The contracting authority? Or both?
Serious companies are already taking notice. In Europe, providers of facial recognition technology have begun including AI Act compliance clauses in their contracts, conducting algorithmic bias audits and documenting in detail how they train their models. In the US, some have simply exited the government surveillance market because the reputational and legal risk was not worth it. In Mexico we are not there yet, but when regulation comes - and it will come, because international and social pressure will demand it - many companies will discover that they have spent years operating in a grey area that has suddenly become illegal.
That is why it is urgent that this conversation reach Congress, not as an ideological banner but as a technical necessity. Mexico needs a law that regulates the government's use of artificial intelligence in public security and that establishes the same controls that exist for other forms of intrusive surveillance: judicial authorisation, proportionality, time limits, independent oversight. And it also needs to regulate the private side: companies that collect biometric data, that build predictive profiles, that sell behavioural analytics. Because technological authoritarianism does not always come from the state; sometimes it comes disguised as "personalisation of services" or "improving the user experience".
For lawyers and for those of us who advise technology companies, the message is clear: governance of these systems is no longer optional. Implementing facial recognition, predictive analytics or any form of automated surveillance without clear data protection protocols, human rights impact assessments, auditing mechanisms and transparency about how the algorithms work is an increasingly high-risk gamble. The question is not whether regulation will come, but how prepared we are to craft legislation that reconciles technological progress and society's needs with the fundamental rights of its members.
The defence of free thought is not a luxury of prosperous societies. It is the foundation of any functional democracy. Surveillance technologies are here to stay, but their legitimacy will depend entirely on the limits we know how to impose on them. The real innovation is not what artificial intelligence can do, but what a society decides not to allow it to do. And that decision, in Mexico, is still pending.
Ricardo Chacón is a partner and director of ECIJA Mexico.
[1] The Washington Post. Uighurs and their supporters decry Chinese 'concentration camps,' 'genocide' after Xinjiang documents leaked. https://www.washingtonpost.com/world/2019/11/17/uighurs-their-supporters-decry-chinese-concentration-camps-genocide-after-xinjiang-documents-leaked/
[2] PBS News. Leaked docs give inside view of China's mass detention camps. https://www.pbs.org/newshour/show/leaked-docs-give-inside-view-of-chinas-mass-detention-camps
[3] United Nations. International Covenant on Civil and Political Rights. https://treaties.un.org/untc/Pages/doc/Publication/UNTS/Volume%20999/volume-999-I-14668-English.pdf
[4] Equality & Human Rights Commission. Convention against Torture & other Inhuman, Degrading Treatment or Punishment. https://sthelenaehrc.org/convention-against-torture-other-inhuman-degrading-treatment-or-punishment/
[5] Biometric Update. EU issues guidelines clarifying banned AI uses. https://www.biometricupdate.com/202502/eu-issues-guidelines-clarifying-banned-ai-uses
[6] Bundesverfassungsgericht (Federal Constitutional Court). Legislation in Hesse and Hamburg regarding automated data analysis for the prevention of criminal acts is unconstitutional. https://www.bundesverfassungsgericht.de/SharedDocs/Pressemitteilungen/EN/2023/bvg23-018.html
[7] Massachusetts M.G.L. c. 6, § 220 (effective 1 July 2021) requires law enforcement agencies to submit a written request before performing or obtaining a facial recognition search, and permits such searches only under a court order or in an emergency. https://www.mass.gov/doc/facial-recognition-report-september-1-2021/download
[8] California AB 1814 (2024) would prohibit a facial recognition match from serving as the sole basis for probable cause or for the issuance of a warrant. https://sjud.senate.ca.gov/system/files/2024-06/ab-1814-ting-sjud-analysis.pdf
[9] Illinois's Biometric Information Privacy Act (BIPA) imposes strong restrictions on the use of biometric data by private entities without consent. While not primarily aimed at law enforcement use, it is one of the most robust laws in the US on facial recognition. https://www.aclu-il.org/en/campaigns/biometric-information-privacy-act-bipa