AI Act: A New Era of Enforcement and Accountability for Artificial Intelligence
Regulation (EU) 2024/1689 of 13 June 2024, known as the AI Act, establishes, for the first time in the European Union, a binding legal framework for the development, placing on the market and use of artificial intelligence (AI) systems.
With a focus on risk mitigation and the protection of fundamental rights, the AI Act imposes obligations on providers, deployers and other actors across the life cycle of AI systems, and sets out an administrative offence regime based on proportionate, dissuasive sanctions for breaches of its rules.
The price of non-compliance: offences and fines in the AI Act
The AI Act sets maximum administrative fines in the regulation itself, reflecting both the seriousness of the infringement and the economic size of the offending entity.
Although the regulation does not use these labels expressly, offences against the rules set out in the AI Act can be grouped into very serious, serious and minor:
- Very serious offences concern the practices prohibited under Article 5, such as the placing on the market of AI systems that use subliminal, manipulative or deceptive techniques to materially distort people's behaviour, appreciably impairing their ability to make informed decisions and leading them to act in a way that may cause them significant harm.
- Serious offences cover non-compliance with the regulation's other obligations, including those applicable to general-purpose AI models, beyond the prohibited practices of Article 5. This group includes, for example, breaches of the obligations relating to high-risk systems, such as failures in conformity assessment processes, technical documentation or risk mitigation measures.
- Minor offences concern the provision of incorrect, incomplete or misleading information to the competent national authorities.
The maximum fines vary according to the classification of the offence, as follows:
- Very serious offences: fines of up to €35,000,000 or, in the case of a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher;
- Serious offences: fines of up to €15,000,000 or, in the case of a company, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher;
- Minor offences: fines of up to €7,500,000 or, in the case of a company, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
For small and medium-sized enterprises, including start-ups, the lower of the two amounts applies, as illustrated in the sketch below.
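For illustration only, the short sketch below models how these ceilings combine. It is a minimal sketch based on the figures summarised above; the tier names and the `max_fine` helper are our own hypothetical labels, not terms from the regulation or any official tool.

```python
# Illustrative sketch of the AI Act fine ceilings summarised above.
# The tier names and this helper are hypothetical, not part of the
# regulation or any official tooling.

# (fixed cap in EUR, share of total worldwide annual turnover)
TIERS = {
    "very_serious": (35_000_000, 0.07),  # Article 5 prohibited practices
    "serious":      (15_000_000, 0.03),  # other obligations, incl. GPAI models
    "minor":        (7_500_000,  0.01),  # incorrect/incomplete/misleading information
}

def max_fine(tier: str, turnover: float, is_sme: bool = False) -> float:
    """Return the maximum fine ceiling for a company in the given tier.

    For companies the ceiling is the fixed amount or the percentage of
    total worldwide annual turnover for the preceding financial year,
    whichever is HIGHER; for SMEs (including start-ups) it is whichever
    is LOWER.
    """
    fixed_cap, share = TIERS[tier]
    turnover_cap = share * turnover
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A large company with €2bn turnover committing a very serious offence:
# max(€35m, 7% of €2bn) = €140m.
print(max_fine("very_serious", 2_000_000_000))        # 140000000.0
# The same offence by an SME: min(€35m, €140m) = €35m.
print(max_fine("very_serious", 2_000_000_000, True))  # 35000000.0
```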
Who supervises? The AI Act's control structure
The practical implementation of the AI Act must rest not only on clear rules, but also on an effective, structured supervision model that ensures that all those involved in the life cycle of artificial intelligence systems fulfil their legal obligations.
Under the regulation, each Member State must designate the national competent authorities responsible for monitoring compliance, including at least one notifying authority and at least one market surveillance authority.
These bodies will be responsible for investigating infringements, imposing fines and reporting them to the European Commission, which will play a coordinating role, ensuring a harmonised approach across Member States.
In the case of Portugal, 14 sectoral bodies have been designated as competent for this purpose and have already been notified to the European Commission:
- Autoridade Nacional de Comunicações (ANACOM);
- Inspeção-Geral das Finanças (IGF);
- Gabinete Nacional de Segurança (GNS);
- Entidade Reguladora para a Comunicação Social (ERC);
- Inspeção-Geral da Defesa Nacional (IGDN);
- Inspeção-Geral dos Serviços de Justiça (IGSJ);
- Polícia Judiciária (PJ);
- Inspeção-Geral da Administração Interna (IGAI);
- Inspeção-Geral da Educação e Ciência (IGEC);
- Entidade Reguladora da Saúde (ERS);
- Autoridade de Segurança Alimentar e Económica (ASAE);
- Inspeção-Geral do Ministério do Trabalho, Solidariedade e Segurança Social (IGMTSSS);
- Autoridade para as Condições do Trabalho (ACT);
- Entidade Reguladora dos Serviços Energéticos (ERSE).
The AI Act's supervision model combines national oversight with European coordination, seeking to strike a balance between innovation and the protection of fundamental rights. Companies should prepare now to respond quickly and transparently to the demands of these entities, namely by implementing internal compliance policies and continuous auditing.
From theory to practice: key implementation dates
The entry into force of the AI Act marks only the beginning of a phased implementation process, carefully structured to allow progressive adaptation by the Member States, companies and other entities involved.
The next steps in implementation are as follows:
- From 2 August 2025: the rules on notifying authorities, general-purpose AI models, governance, penalties and confidentiality become applicable;
- From 2 August 2026: the remaining provisions of the regulation become applicable, with limited exceptions;
- From 2 August 2027: full compliance becomes mandatory for the high-risk AI systems covered by Article 6(1).
The AI Act's administrative offence regime is not just a legal formality: it is a clear sign that the European Union treats artificial intelligence as a matter of public interest and of the protection of fundamental rights.
The large fines for breaching the rules set out in the regulation represent a real and tangible financial risk, one that already calls for building a genuine and consistent internal compliance culture. Without effective control policies and processes, organisations working with AI will inevitably risk committing infringements, even unintentionally. Implementing internal auditing mechanisms, technical records, document control and ongoing training is not only prudent; it is essential to ensure compliance and protect organisations' reputations in an increasingly demanding regulatory landscape. More than an obligation, AI compliance should be seen as a strategic investment.
Timely preparation for this new legal framework will be crucial to ensuring not only legal compliance and preventing financial risk, but also the trust of citizens and economic operators in the safe and responsible use of AI.