When technology outpaces the law
Today we use artificial intelligence almost without realizing it. It suggests what to search for, how to get around, what to buy. But have we ever asked ourselves: who controls all of this? Who decides what an algorithm may and may not do? Technology moves fast; the law, much more slowly.
The regulation of artificial intelligence has become a crucial issue. From big tech to governments, everyone is trying to establish a framework of rules. But how do you regulate something that evolves every day? And who has the right to do so?
What does it mean to regulate artificial intelligence?
Regulating AI means defining limits, criteria, and responsibilities for those who develop, use, or are affected by intelligent systems. The objective is twofold: to maximize the benefits of AI and to reduce the risks for citizens, businesses, and society.
The rules may cover different aspects: the transparency of algorithms, the protection of privacy, the absence of discrimination, liability in case of error. In practice, the goal is to prevent AI from becoming a “space without rules”, a kind of digital no man's land.
Who decides the rules?
The rules of artificial intelligence are not written by a single authority. On the contrary, they are decided by a complex web of actors:
– The European Union, with the AI Act, the first comprehensive attempt to create an organic law on AI.
– National governments, with different approaches: from China's rigid model to the more self-regulating one of the United States.
– Big Tech, which often set de facto rules through the massive use of their platforms.
– International organizations, such as the United Nations, the OECD, and the Council of Europe, which promote ethical guidelines.
– Civil society: academics, activists, and citizens who are demanding more transparency and rights.
In our article Ethics of Artificial Intelligence: why it concerns us all, we already addressed the issue of liability. But today we go further: who really has the power to impose global rules?
Ethics, power, and artificial intelligence
The problem is not only legal. It is also political and cultural. To regulate means to decide who can do what, who is protected, and who benefits. It means confronting the power of algorithms, as explored in AI and surveillance: who protects whom?.
An example? Predictive algorithms used in justice or credit scoring. Who can guarantee that they are non-discriminatory? Who can access their code? And when they get it wrong, who pays?
Without clear rules, artificial intelligence risks consolidating inequality, accentuating bias, and undermining fundamental rights.
Practical examples and real cases
In Europe, the AI Act classifies systems by level of risk: unacceptable (e.g. China-style social scoring), high (e.g. AI for recruiting or justice), limited, and minimal. High-risk systems will be subject to obligations of transparency, traceability, and human oversight.
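As a rough illustration, the tiered logic described above can be sketched as a simple lookup. This is not an implementation of the regulation: the tier names and the recruiting/justice and social-scoring examples come from the text above, while the other examples and the `obligations_for` helper are hypothetical, added here only to make the classification concrete.

```python
# Illustrative sketch of the AI Act's risk-based classification.
# Tier names follow the article; "chatbots" and "spam filters" are
# assumed examples for the lower tiers, not quotes from the law.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring"],
        "consequence": "prohibited",
    },
    "high": {
        "examples": ["recruiting", "justice"],
        "consequence": "transparency, traceability, human oversight",
    },
    "limited": {
        "examples": ["chatbots"],
        "consequence": "disclosure obligations",
    },
    "minimal": {
        "examples": ["spam filters"],
        "consequence": "no specific obligations",
    },
}

def obligations_for(system_type: str) -> str:
    """Return the tier and obligations for a given example system type."""
    for tier, info in RISK_TIERS.items():
        if system_type in info["examples"]:
            return f"{tier}: {info['consequence']}"
    return "unclassified: assess case by case"
```

The point of the tiered design is that obligations scale with risk: the same rulebook can ban one use outright while leaving another almost untouched.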
In the US, by contrast, regulation is still fragmented. Some states, like California, have advanced privacy laws (such as the CCPA), but there is no single federal law.
At the global level, the OECD has compiled a list of guiding principles for responsible AI, while UNESCO has published an ethics framework for artificial intelligence.
👉 The OECD AI Principles
👉 The UNESCO Recommendation on the Ethics of Artificial Intelligence
These documents are voluntary, but influence public policies.
Frequently asked questions (FAQ)
Does regulating AI slow down innovation?
Not necessarily. Good regulation can stimulate innovation in a safe and responsible way, building trust among citizens and in the markets.
Can AI be dangerous without rules?
Yes. Without adequate standards, AI can amplify bias, violate privacy, or be used to manipulate opinion, as happens with automated misinformation.
Can we have a global law on AI?
It is difficult, but increasingly necessary. Today, the risk is that every country goes it alone, with incompatible standards. International cooperation will be essential.
Conclusion: rules so we don't lose control
Artificial intelligence is not neutral. Behind every algorithm there are choices, interests, and consequences. Regulating it does not mean blocking it, but steering it. And to do so, we need constant dialogue between governments, businesses, and citizens.
The questions are many: who supervises the algorithms? How do we protect fundamental rights? Can we build a digital democracy where AI serves the people?
The answer is not simple. But the time to ask the question is now.
If you are interested in the legal implications, you can read our in-depth article Digital justice: Artificial Intelligence in the dock?