Companies make progress in implementing corporate AI plans to improve governance and reduce risks

21 January 2026
The entry into force of the European AI Regulation and the growing role of unions place social and labour compliance at the centre of AI adoption strategy.

From August 2026, unions will increasingly require companies to comply with these plans.

An AI policy should form part of the company's overall governance and AI strategy. Companies that allow the use of AI tools in the workplace should audit their current use and determine whether, and to what extent, employees may use them.


The EU AI Act establishes a clear timeline: starting in February 2025, practices incompatible with democratic values are prohibited, and in August 2026, obligations for high-risk systems come into force. At the same time, it requires companies and organizations to promote AI literacy, which the regulation defines as the set of knowledge, skills, and understanding necessary to critically assess the effects of AI and fulfil legal obligations. Training helps build this knowledge.


According to the experts consulted in this report, fostering a culture of innovation encourages employees to embrace change, explore new ideas, and participate in the adoption process of AI. The creation of this culture begins with leadership that promotes openness, creativity, and curiosity, and encourages teams to consider how AI can generate value and improve operations. Leadership can foster an innovation-friendly mindset by communicating a clear vision of the role of AI in the organization, explaining its potential benefits, and addressing common fears.


Several challenges to address

For Raúl Rojas, labour partner at ECIJA, “the implementation of artificial intelligence systems (AIS) in the workplace not only poses technological challenges but also clear social and labour compliance challenges that many organizations still have not internalized. Recent experience shows us that the lack of internal control in the design, implementation, and oversight of AI tools and systems can have a direct impact on the fundamental rights of workers, particularly regarding equality, non-discrimination, and privacy, and opens the door to new types of litigation within organizations.”


“The European AI Regulation (RIA), mentioned above, emphasizes the obligation of companies to ensure that their personnel have sufficient knowledge about the functioning, limitations, and risks of these systems, through the obligation to provide training in ‘literacy’ about the uses and risks of AI. However, from a social and labour compliance perspective, this requirement must be understood as part of a management system that requires risk identification, the establishment of controls, and the documentation of decisions made and their explainability throughout the entire life cycle of the algorithmic system,” he comments.


In his opinion, “despite this, in many cases, key departments such as Human Resources and Regulatory Compliance continue to be excluded from AI projects, which increases the likelihood of non-compliance and labour disputes arising from automated decisions or the lack of transparency in the obligation to provide algorithmic information to the legal representatives of workers. In fact, one of the most obvious risk factors appears in selection processes, performance evaluation, salary setting, productivity analysis, or layoffs, where algorithms are being used that companies do not always audit or explain adequately.”


Rojas points out that “when these systems affect working conditions, job continuity, ordinary labour management, or workplace monitoring, or when risks arise from the misuse or abuse of AI systems, different regulatory obligations emerge that must be understood in a coordinated manner.”


He highlights “the case, for example, of the obligation to inform the legal representatives of workers about the use of algorithms, with or without AI, when they affect decision-making that may impact working conditions or access to and maintenance of employment, including profiling (art. 64 ET); or the obligation to hold a hearing and ensure participation in cases where AI tools are integrated into corporate digital devices, which is becoming increasingly common, in order to enable labour control or establish clear criteria for acceptable uses of this technology in the company (art. 87 LOPDGDD).”


For this legal expert, “ultimately, the implementation of AI systems in companies, especially when it affects or may affect labour rights, cannot be managed as an exclusively technological project. It must be treated as a transversal element of the organization’s strategy for digital transformation and people management, with a direct impact on governance, fundamental rights, labour relations, and the prevention of the legal risks that companies face with this emerging technology.”


Read the full article published in Law and Trends, with the participation of our partner Raúl Rojas, here.
