The increasing implementation of artificial intelligence (AI) in the workplace has generated intense debate about the need for regulations that protect workers' rights and ensure the ethical and responsible use of these technologies.
The European Artificial Intelligence Regulation (AI Act)
Regulation (EU) 2024/1689, adopted by the European Parliament and the Council, establishes a harmonised regulatory framework for the use of AI, including in the workplace. It aims to ensure that these technologies are used in a transparent, fair and non-discriminatory manner.
This regulation introduces the classification of Artificial Intelligence Systems (AIS) according to their level of risk:
- High risk: AI systems used in recruitment and personnel selection, performance evaluation, task assignment, promotion or dismissal.
- Low risk: Applications with minor impact on fundamental rights.
High-risk systems must meet stricter requirements, such as human oversight, data governance and impact assessments, to minimise discrimination and violations of fundamental rights.
In addition, the regulation prohibits certain practices such as:
- Subliminal manipulation that affects people's behaviour.
- Discriminatory social classification leading to unfavourable treatment.
- Predicting criminal behaviour based solely on algorithmic profiling.
- Mass biometric surveillance without legal justification.
National Regulations in Spain
In Spain, the regulation of AI in the workplace has advanced with the Rider Law (Law 12/2021), which introduced the obligation to inform workers' representatives about algorithms used in workplace decision-making.
This regulation requires companies to:
- Explain the parameters and rules on which their algorithms are based.
- Inform workers about how the algorithms affect their working conditions.
- Keep this information up to date in case of changes to the algorithms.
Where there is no trade union representation, the company must inform workers individually.
Responsible Use of AI Policies
Although regulations set out basic requirements, many companies are choosing to develop internal policies for the responsible use of AI, which include:
- Training and awareness for the ethical use of AI.
- Human oversight of automated decisions.
- Impact assessments to avoid discriminatory bias.
- Data protection and privacy in the use of monitoring tools.
Where AI is used to monitor employees, the Organic Law on Data Protection and Guarantee of Digital Rights (LOPDGDD) requires companies to establish clear criteria and to consult workers' representatives.
Challenges and Ethical Dilemmas
Despite regulations, the use of AI in the workplace still poses significant challenges:
- Algorithmic discrimination: Algorithms may perpetuate historical biases in hiring and promotion.
- Privacy and surveillance: AI tools can collect sensitive data, creating risks to workers' rights.
- Transparency and explainability: Employees need to understand how these systems work and how they affect their employment rights.
Conclusions
The advance of AI in the workplace represents a regulatory and ethical challenge. European and Spanish regulations have taken important steps to ensure fair and responsible implementation, but oversight and proper enforcement of these laws will be key to protecting workers' rights in the digital age.
Companies must take a proactive approach to AI governance, not only complying with legislation but also adopting good practices that minimise risks and safeguard workers' dignity and privacy.