Introduction – AI is everywhere: but is it neutral?
In recent years, artificial intelligence has quietly moved out of the laboratory and into our daily lives. It is no longer a matter only for scientists and engineers: AI decides what we see on social media, filters our résumés, drives vehicles, and sets health priorities. Faced with all this decision-making power, it is natural to ask: who determines what is just, fair, and proper? Is artificial intelligence really as neutral as we are so often told?
In reality, technology reflects the intentions, limitations, and prejudices of those who design it. Talking about the ethics of artificial intelligence means addressing these social and moral implications. This is not abstract philosophy, but something that touches the practical life of everyone.
Before tackling the most pressing issues in the ethics of artificial intelligence, it may be useful to clarify what we really mean by “AI”. We covered this in our getting-started overview: What is artificial intelligence (and what isn't, really).
Algorithmic bias – How it arises and why it is dangerous
Imagine sending in a résumé and being rejected before a human being ever reads it. It really happened. In 2018, Amazon withdrew an automated screening system that penalized female candidates because it had been trained on historical data in which men were overrepresented. The algorithm did not “want” to discriminate, but it did, unknowingly replicating a pattern present in the data.
This is an example of algorithmic bias: distortions in the results of an AI system caused by biases present in the data or in the models. The risk is huge, especially when the algorithm is used to screen candidates, evaluate loan applications, or suggest criminal sentences.
The problem is that, unlike a human, AI has no moral awareness. If the data it receives are distorted, it reproduces – and often amplifies – those distortions. And the result may seem “objective” only because it comes from a machine.
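To see how a model can inherit bias without anyone programming it to discriminate, here is a toy sketch in Python. The numbers are entirely made up (they are not Amazon's data), and the “model” is deliberately naive: it simply scores candidates by the historical hiring rate of people who share an attribute.

```python
# Hypothetical historical hiring records: (gender, was_hired) pairs.
# Men are overrepresented among past hires, as in the Amazon case.
history = (
    [("M", True)] * 80 + [("M", False)] * 20
    + [("F", True)] * 10 + [("F", False)] * 40
)

def hire_rate(gender: str) -> float:
    """Naive 'model': score a candidate by the historical hire
    rate of past candidates with the same attribute value."""
    outcomes = [hired for g, hired in history if g == gender]
    return sum(outcomes) / len(outcomes)

print(hire_rate("M"))  # 0.8 -> the model favors men...
print(hire_rate("F"))  # 0.2 -> ...purely because the data were skewed
```

No rule in this code mentions discrimination; the disparity comes entirely from the skewed history the model was fed. Real machine-learning systems are far more complex, but the mechanism is the same.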
Automated surveillance and privacy
Another urgent topic is surveillance. Today, many cities use facial recognition systems for security reasons. Some governments use them to monitor masses of citizens, limit protests, or even punish behaviors deemed “non-compliant”. In theory, everything is under control. In practice, transparent oversight is often lacking, and the technologies are adopted before the ethical implications have even been discussed.
Facial recognition, in particular, has shown serious accuracy problems for people who are not white, leading to miscarriages of justice and systemic discrimination. But even beyond technical accuracy, the central question remains: do we really want to live in a society in which every movement is recorded, analyzed, and potentially punished?
Privacy is not only an individual right. It is a necessary condition for freedom. And when AI makes surveillance invisible, the boundaries between security and control become blurry.
Algorithms and decision-making responsibility
One of the promises of AI is to “make better decisions”. In part this is true: a system can analyze more data than a human and find useful correlations. But the real question is: who is responsible when the algorithm gets it wrong?
If a medical AI fails to detect a disease, who pays? The doctor? The software manufacturer? The hospital? And if an autonomous vehicle hits a pedestrian, who answers for it?
Today, many decision-making systems are framed as support tools but, in truth, have a decisive impact. A judge may rely on an algorithm to estimate the risk of recidivism. A company may rely on automatic scoring to decide whom to hire. In both cases, responsibility is dispersed, and the human takes refuge behind the “neutrality” of the machine.
But algorithms are not oracles. They are social and technical constructions, and every automated decision requires a clear regulatory framework that establishes who decides what, and who answers for it.
The issue of control and power
The ethics of artificial intelligence is not only about errors or biases. It also concerns power. Those who control the algorithms control, in effect, many dimensions of public and private life.
Big technology companies amass huge amounts of data and develop ever more sophisticated models, often without real democratic oversight. Artificial intelligence today is not evenly distributed: it is a tool in the hands of a few, who can use it to maximize profits, guide opinions, and influence markets and policy choices.
This is not about demonizing businesses, but about recognizing that algorithmic power has real consequences for democracy, for individual autonomy, and for the distribution of opportunities. Letting these dynamics unfold without critical reflection means giving up the chance to govern the future.
An article already published on La Bussola dell'IA, “Surveillance and Artificial Intelligence: Who Controls Whom?”, examines some of these issues, analyzing how decision-making power is shifting toward algorithmic systems that are often invisible.
The need for transparent regulation
In recent years, the European Union has made progress with the so-called AI Act, the first legislative proposal in the world to regulate the use of AI on the basis of risk. The idea is simple but powerful: the more dangerous a technology is to fundamental rights, the stricter the requirements for using it.
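The risk-based logic can be summarized as a simple lookup from risk tier to obligations. The sketch below is an illustration, not legal text: the tier names follow the EU proposal, while the example systems and the wording of the obligations are simplified by us.

```python
# Simplified illustration of the AI Act's risk-based approach.
# Higher risk tier -> heavier obligations. Examples are indicative only.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring by governments"],
        "obligation": "prohibited",
    },
    "high": {
        "examples": ["CV screening", "credit scoring"],
        "obligation": "conformity assessment, transparency, human oversight",
    },
    "limited": {
        "examples": ["chatbots"],
        "obligation": "disclose that the user is interacting with an AI",
    },
    "minimal": {
        "examples": ["spam filters"],
        "obligation": "no specific requirements",
    },
}

def obligation_for(tier: str) -> str:
    """Return the (simplified) obligation attached to a risk tier."""
    return RISK_TIERS[tier]["obligation"]

print(obligation_for("unacceptable"))  # prohibited
```

The point of the design is proportionality: a spam filter and a credit-scoring system are not treated the same way, because the stakes for fundamental rights are not the same.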
It is an important step, but on its own it is not enough. We need global governance, because AI has no borders. We need public involvement, because the rules cannot be decided by engineers and lobbyists alone. And, above all, we need transparency: algorithms that affect people's lives must be open to analysis, discussion, and challenge.
Regulation is not a brake on innovation. It is the condition for making it sustainable. Because trust, today, is the real engine of progress.
For those who wish to know more, read AlgorithmWatch's analysis of algorithmic transparency and the European policies currently under way.
Conclusion – Why ethics is not a luxury
In the debate on artificial intelligence, ethics is often pitted against efficiency, as if worrying about rights meant slowing down innovation. It is a false dilemma. Without ethics, innovation risks creating new inequalities, reinforcing injustice, and undermining collective trust.
The ethics of artificial intelligence is not a luxury for academics. It is a daily need. It is the tool that allows us to keep the human being at the centre of an increasingly automated world. And it is a conversation that concerns everyone: not only those who write the code, but anyone who reads the news, uses a smartphone, or looks for a job.
Do you want to explore these issues further and stay up to date?
Check out the other articles in the Ethics and society category, or share your opinion in the comments.
👉 Or subscribe to the newsletter of La Bussola dell'IA to receive new ideas and insights every Friday.