Artificial intelligence and its impact on the world order
The use of artificial intelligence (AI) for geostrategic purposes is now a well-established reality in various recent international scenarios. Its application is not limited to the military sphere, where it is used for planning and executing highly precise operations, but also extends to the information sphere, through the generation of content capable of influencing the context preceding a conflict and the perception of the various actors involved.
The relevance of AI is undeniable, given its impact on the global economy and the profound transformation it is driving in production and organizational processes, both in the public and private sectors. This scenario explains why major powers are concentrating their efforts on developing and controlling this technology, as well as securing access to the necessary resources and raw materials to maintain it.
This explains the growing interest in rare earths, in a scenario of international competition for technological leadership. The data centers that support AI systems require rare earth elements (REEs), a set of 17 metallic elements (the 15 lanthanides, along with scandium and yttrium) which, although not scarce in absolute terms, are highly concentrated geographically and complex to extract and purify. These circumstances give them a high strategic value.
The major technology companies aspire to develop AI in regulatory environments that do not impose excessive restrictions, citing the need to avoid competitive disadvantages. In this regard, the proliferation of AI regulations in certain U.S. states has created tensions with the more flexible regulatory frameworks of other countries.
Although voices have emerged in the United States warning of the need to establish a regulatory framework that delineates the uses of AI and mitigates its risks, these concerns did not prevent the signing, in December, of an executive order that, invoking national security reasons, limits the ability of states to regulate AI independently. The aim is a unified federal regulation that ensures certain strategic objectives are met.
This approach invites reflection on whether appropriate priority will be given to principles such as transparency, accountability for algorithmic decisions, risk mitigation, and the promotion of responsible innovation. These are the elements that inspired the state regulations approved so far, and they align with the European regulatory framework on AI, as established in the recently passed AI Act, which is based on risk control and the protection of individuals from inappropriate uses of the technology.
Everything points to a deepening competition in the development and use of AI in many areas, including military and intelligence, in a context of limited regulatory control globally. This situation could place the EU and its protective regulatory framework at a potential competitive disadvantage, the consequences of which are difficult to anticipate.
This is not only an economic issue but also a challenge with implications for the security and defense of states, which is crucial for their geopolitical positioning and the role they will play in the future international scenario.
All of this places the European Union in a complex dilemma: it cannot afford to be left behind in this technological and strategic challenge, but neither can it renounce the protection of fundamental rights and the privacy of citizens. It is essential to find a balance that allows progress in the development and implementation of artificial intelligence while respecting the legal framework that protects individuals.
Article by Javier López, partner at ECIJA, published in Cinco Días.