The dystopian wellbeing of algorithms... and its risks
It is widely known that one of the main reasons why the use of artificial intelligence has spread so rapidly is the real and immediate benefit that this technology brings to society at large.
There is no denying that artificial intelligence contributes to social welfare, bringing benefits to all sectors of the economy. It enables improvements in people's quality of life, health, environmental performance and social activities. But it also carries great risks, such as opacity in decision-making, discrimination based on gender and other characteristics, the creation of addictive systems, intrusion into our private lives, or its use for criminal purposes.
When we delegate social welfare to algorithms, the first reflection we should make is that algorithms, in and of themselves, are neither ethical nor unethical. It is the people who design them, train them with data and define their objectives who determine, according to their interests and/or their own biases, how they work. And those interests do not always coincide with the general interest.
The simplification of processes and the improvement of personal and professional experiences in any field, in short, the efficiency and capabilities of these algorithms, can lead us to depend on them, unaware of the danger of algorithmic designs that leave out minority social realities, or of learning models built on asymmetric information that distort decision-making in an apparently optimal way.
All this can lead to undesirable dystopian situations in advanced democratic societies such as ours, societies that are made up of people. We must not lose sight of this human approach if we do not want artificial intelligence to become a technological revolution that, paradoxically, makes us regress as a society.
On the more extreme side of this revolution, for some years now there have been voices and currents of thought that predict the transformation of the human condition through advanced technologies that help to overcome human limitations. This is the intellectual movement known as transhumanism, which affirms the possibility of improving the human condition by making available technologies that eliminate ageing and significantly enhance human intellectual, physical and psychological capacities, with even immortality as a possibility. This raises the question: where is the limit?
It is therefore of paramount importance that artificial intelligence is built not only on efficiency or legality, but also on ethics, which provides the framework of values and principles that should guide actions to improve social welfare, health, happiness and quality of life. It must be fair, designed in accordance with the ethical criteria established in our democratic societies, and transparent, thus allowing for accountability.
Ethics must also serve as a basis for law, being an intrinsic part of it. In this context, regulations governing the use of AI must be designed to ensure the social, economic and ethical order of our society. This is what the European Regulation on Artificial Intelligence focuses on, with varying degrees of success, by regulating AI from the perspective of the risks that the uses of this technology entail for the EU's fundamental rights and values.
At least in the EU environment, and without prejudice to the regulatory complexity that dominates this jurisdiction, it seems that there is hope and that it is possible to move forward and innovate in an ethical and responsible manner. We must not let our guard down. Training, awareness-raising and constant debate are key levers to continue along these lines.