War in the Future? The Ominous Shadow of Smart Weapons

Imagine a not-too-distant future in which decisions over life and death are made not by humans but by sophisticated algorithms embedded in lethal autonomous weapon systems. It sounds like science fiction, I know, but that reality is approaching with giant strides. The application of artificial intelligence to the military sector is opening up new and, let's face it, rather disturbing scenarios. We are not talking about simple remote-controlled drones, but about systems able to operate autonomously, identify targets, and engage the enemy without any direct human intervention.

This prospect raises a series of ethical, legal, and practical questions that we cannot ignore. At the center of the debate is the question of responsibility. Who will be held liable if an autonomous weapon makes an error, causing collateral damage or targeting innocent civilians? The programmer? The military commander who deployed the system? The artificial intelligence itself? Currently, international humanitarian law rests on the principle of human responsibility for decisions to attack. Transferring that decision to a machine undermines the very foundations of this system.

Another crucial aspect concerns control. Can we really retain control of these weapons once they are deployed? How can we ensure that they do not slip out of our control, make unforeseen decisions, or otherwise act in ways contrary to our intentions? The complexity of artificial intelligence algorithms makes it difficult to predict their behavior with certainty in every situation. Entrusting a machine with the power to kill means taking a leap in the dark, with potentially catastrophic consequences.

Then there is the far-from-secondary problem of the biases inherent in the data on which these artificial intelligences are trained. If the data reflect the inequalities and discrimination present in our society, there is a serious risk that autonomous weapons will inherit and amplify those biases. Imagine a facial recognition system that works less well for certain ethnic groups, or a threat-identification algorithm that associates certain demographic characteristics with a higher level of danger. The risk of algorithmic discrimination in contexts of war is very real, and terribly disturbing. Biased data and algorithmic discrimination can lead to unjust and tragic decisions on the battlefield.
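To make the idea concrete, here is a minimal, purely illustrative Python sketch. The records, group labels, and numbers are invented, not taken from any real system; the sketch only shows how one would measure whether a classifier's false-positive rate, that is, harmless people flagged as threats, differs across demographic groups, which is exactly the kind of disparity described above.

```python
# Illustrative sketch with invented data: measure the false-positive rate
# ("flagged as a threat, but actually harmless") per demographic group.
from collections import defaultdict

# Hypothetical records: (group, actually_a_threat, predicted_as_threat)
records = [
    ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 1, 1),
]

false_positives = defaultdict(int)
harmless = defaultdict(int)

for group, truth, prediction in records:
    if truth == 0:               # the person is actually harmless
        harmless[group] += 1
        if prediction == 1:      # ...but the model flags them as a threat
            false_positives[group] += 1

for group in sorted(harmless):
    rate = false_positives[group] / harmless[group]
    print(f"{group}: false-positive rate = {rate:.0%}")
```

Run on this toy data, the two groups show very different false-positive rates even though the same model scored both. In a civilian setting such a disparity is unjust; in a targeting system it is potentially lethal.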

The debate on autonomous weapons is anything but academic. Several military powers are investing considerable resources in the research and development of these technologies. The arms race of the future has already begun, and the risk is that the logic of deterrence and competition will prevail over prudence and ethical reflection. We must prevent technological innovation from dragging us into an uncontrollable spiral in which decisions about war and peace are delegated to machines devoid of conscience and empathy.

It is crucial to promote an open and inclusive international dialogue involving governments, scientists, ethics experts, civil society organisations, and the general public. We need to define clear and binding limits on the development and use of autonomous weapons before it is too late. The stakes are too high to allow us to remain inert. The future of war, and perhaps of humanity itself, depends on the choices we make today.

This is not about halting technological progress, but about directing it responsibly and consciously. Artificial intelligence has the potential to bring extraordinary benefits in many fields, but its application to weapons demands particularly serious and thorough reflection. We must ensure that technology remains at the service of humanity and does not become a threat to our very existence. Awareness of the risks and mobilization for a safer future are the responsibility of all of us. We cannot let the ominous shadow of autonomous weapons obscure our future.

