Does the use of bots violate the right to the protection of personal data?

Articles | 2 October 2025
Risks and challenges of using bots in political campaigns

A few weeks ago, the press published a report exposing those allegedly behind a network of "bots" dedicated to attacking and spreading false information on social networks about presidential candidates Jeannette Jara and Evelyn Matthei. The investigation detailed the modus operandi of the people linked to the propagation of hate messages on digital platforms and reopened the debate on the use of "fake news" in electoral campaigns.

One of the most serious implications of these technologies is their ability to influence public opinion without people's knowledge, manipulating trends through programmes whose artificial origin goes unnoticed. These practices, which use artificial intelligence, big data analysis and other data sources to impersonate legitimate users, raise serious privacy and transparency concerns, especially when carried out without the data subject's consent and through automated systems that collect and process personal data.

To understand how bots work, it helps to know that a bot is a programme that performs repetitive, predefined and automated tasks, which allows it to work much faster than a person; on social networks, these automated programmes simulate human interaction.
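To make that automation concrete, the sketch below is a minimal, purely illustrative bot loop in Python. The `post_message` function is a hypothetical stand-in invented for this example; real platforms require authenticated clients, which are out of scope here.

```python
import random
import time

# Hypothetical stand-in for a social network's publishing API; real
# platforms require an authenticated client, not shown here.
def post_message(text: str) -> None:
    print(f"[bot] posting: {text}")

# Predefined talking points the bot repeats automatically.
MESSAGES = [
    "Talking point 1 about the candidate",
    "Talking point 2 about the candidate",
    "Talking point 3 about the candidate",
]

def run_bot(posts_per_minute: int = 120, duration_s: int = 5) -> None:
    """Repetitive, predefined, automated posting -- far faster than a person."""
    interval = 60.0 / posts_per_minute
    deadline = time.time() + duration_s
    while time.time() < deadline:
        post_message(random.choice(MESSAGES))
        # A small random jitter is a common trick to make the cadence look human.
        time.sleep(interval + random.uniform(0.0, 0.2))

if __name__ == "__main__":
    run_bot()
```

Even this trivial loop shows the asymmetry of scale: a single script can emit hundreds of messages per hour, and a network of such scripts can manufacture the appearance of a trend.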

In this context, Law 19.628, as amended by Law 21.719 on personal data protection, becomes relevant: it establishes the obligation to carry out a data protection impact assessment when the processing may pose a high risk to the rights and freedoms of data subjects, whether due to the technology used, the volume of data or the type of analysis performed. This is particularly critical in cases of profiling and automated decisions that may infer or reveal aspects such as political leanings, religious beliefs or cultural preferences and, through segmentation or micro-targeting techniques, unduly influence a person's behaviour and the choices they make.
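As a rough mental model only, the high-risk factors just listed can be read as a disjunctive checklist. The sketch below is a simplification invented for illustration, not the statutory test.

```python
# Illustrative screening sketch only -- not the statutory test in Law 21.719.
# It encodes the factors mentioned above as a simple disjunctive checklist.
def dpia_required(uses_novel_tech: bool,
                  large_scale_data: bool,
                  profiling_or_automated_decisions: bool,
                  infers_sensitive_traits: bool) -> bool:
    """Treat the processing as high risk if any aggravating factor is present."""
    return any([
        uses_novel_tech,                    # e.g. AI-driven bot networks
        large_scale_data,                   # volume of data processed
        profiling_or_automated_decisions,   # type of analysis performed
        infers_sensitive_traits,            # political leanings, beliefs, etc.
    ])

# A bot network doing political micro-targeting trips every factor at once:
print(dpia_required(True, True, True, True))  # -> True
```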

At the heart of data protection is the lawful basis that enables the processing. Bots do not "have" consent: it is the controller behind the operation who must rely on it (or justify another basis). If a bot collects identifiers, performs tracking or uses/infers profiles to select or tailor messages, there is processing of personal data and a clear legal basis is required (e.g., valid consent or legitimate interest with tests and safeguards), as well as prior transparency, explicit purposes, and the right to object to profiling.

Where automation merely disseminates content in bulk without segmenting by personal attributes, the risk to personal data protection is lower. By contrast, if there is micro-targeting or enrichment with data obtained from platforms or third parties without informing the data subject, the principles of lawfulness and fairness (as well as minimisation and purpose limitation) are compromised. In short, the problem arises before any bombardment of information (in how the data is obtained, what is inferred and what it is used for) and cannot be remedied without a strengthened duty of information and a solid basis of legitimacy, as the sketch below illustrates.
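To see why micro-targeting turns mere dissemination into personal data processing, consider this hypothetical sketch, in which a message is tailored to an inferred political leaning. The `UserProfile` type and `pick_message` function are invented for illustration; the point is that the selection logic itself consumes and produces personal data.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    # Every field below is personal data; 'inferred_leaning' is itself an
    # inference, and producing it already counts as processing.
    user_id: str
    region: str
    inferred_leaning: str  # e.g. derived from likes and follows

def pick_message(profile: UserProfile) -> str:
    """Segmentation step: the message varies with inferred personal attributes."""
    if profile.inferred_leaning == "undecided":
        return f"Voters in {profile.region}: have you seen the latest poll?"
    return f"Supporters in {profile.region}: share this before election day!"

# Even without storing a name, deciding WHO sees WHAT based on inferred
# traits is profiling, and therefore needs a lawful basis and transparency.
voter = UserProfile(user_id="u123", region="Valparaíso", inferred_leaning="undecided")
print(pick_message(voter))
```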

If bots collect and analyse the social media interactions of users who share sensitive data (data that can lead to profiling), our law imposes an obligation to conduct a data protection impact assessment (DPIA), which allows risks to be identified and mitigated before the processing is carried out. And where there is no certainty about who is behind these bot profiles, no one answers for the security of social media users' personal data.

Moreover, the use of bots in political campaigns is not limited to the dissemination of messages or the manipulation of social media trends. Their true reach is magnified when the information collected feeds Big Data systems, which work with large volumes of data generated continuously and at high speed, stored in massive databases that, without adequate security measures, are vulnerable to unauthorised access, leaks or misuse.

The Cambridge Analytica case: a turning point in data protection

The Cambridge Analytica case exploded when it was revealed that the consulting firm had obtained, through a Facebook app presented as a personality test, data from millions of users and their contacts (without informed consent or sufficient transparency) in order to build psychographic profiles (tastes, beliefs, personality traits) and micro-target political messages during the 2016 US presidential campaign. The combination of mass collection, sensitive inferences and opaque segmentation showed how seemingly trivial signals (likes, page follows, interactions) could be transformed into a tool for large-scale political manipulation.

The Cambridge Analytica case holds a direct lesson for Chile: without enhanced transparency, clear grounds of lawfulness and effective controls over profiling (including inferences about tastes and beliefs), any digital strategy, human or automated, can end up violating people's privacy and digital rights, eroding public trust and exposing campaigns and platforms to severe sanctions and reputational damage.

Prevention requires disclosing "who processes what data, for what purpose and with what logic", labelling automated or bot-generated content, enabling the right to object to or opt out of profiling, and limiting data collection and use to the stated legitimate purpose. In short: more light and less black box in digital political communication.



Ghislaine Abarca, Associate in the Personal Data Protection Area of ECIJA Chile.
