Autonomous weapons decide life and death without human control
Imagine a not-too-distant future in which decisions over life and death are made not by humans but by sophisticated algorithms embedded in lethal weapon systems driven by artificial intelligence. It sounds like science fiction, I know, but that reality is approaching in giant steps.
The application of artificial intelligence in the military sector is opening up new and, let's face it, rather disturbing scenarios. We are not talking about simple remote-controlled drones, but about systems able to operate autonomously, identifying targets and engaging the enemy without any direct human intervention. It is a transformation that raises fundamental questions about the future of humanity.
What autonomous weapons really are
Lethal autonomous weapon systems (LAWS) represent the most advanced evolution of artificial intelligence applied to the military field. These systems are generally divided into two categories: semi-automatic "human in the loop" systems, which cannot function without human intervention, and fully automated "human out of the loop" systems, which, once activated, operate in complete independence.
The difference is crucial. While a Predator drone always requires a human operator to decide when to fire, new-generation autonomous weapons can identify, track, and attack targets based exclusively on their own algorithms. As we explored in our article on AI on a leash, the question of how to control intelligent machines is becoming ever more central.
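To make the distinction concrete, here is a minimal, purely illustrative Python sketch of where the human authorization gate sits in the two architectures. Every name in it (Target, request_human_authorization, engage, the 0.9 confidence threshold) is a hypothetical placeholder invented for this example, not a reference to any real system.

```python
from dataclasses import dataclass

@dataclass
class Target:
    track_id: str
    confidence: float  # classifier score between 0.0 and 1.0

def request_human_authorization(target: Target) -> bool:
    """Hypothetical stand-in for an operator console: a human
    reviews the proposed target and explicitly approves or rejects it."""
    answer = input(f"Engage track {target.track_id} "
                   f"(confidence {target.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def engage(target: Target) -> None:
    print(f"Engaging {target.track_id}")  # placeholder effect

def human_in_the_loop(targets: list[Target]) -> None:
    # Semi-automatic: the system proposes, a human disposes.
    for t in targets:
        if request_human_authorization(t):
            engage(t)

def human_out_of_the_loop(targets: list[Target], threshold: float = 0.9) -> None:
    # Fully automated: once activated, only the algorithm's own
    # confidence score separates detection from engagement.
    for t in targets:
        if t.confidence >= threshold:
            engage(t)
```

The point of the sketch is structural: in the second function, the only safeguard between detection and engagement is a numeric threshold fixed in advance, with no human judgment in between.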
Currently, full automation is more a vision than an operational reality, but many countries are investing heavily in this direction. The conflict in Ukraine has been a crucial testing ground, showing how drones and AI-driven autonomous systems can significantly improve military effectiveness.
Artificial intelligence is transforming modern warfare
The arms race of the future has already begun. The United States allocated $1.8 billion to military AI development in 2024, with roughly 685 active projects. China and Russia are developing increasingly sophisticated systems. According to a report by the Stockholm International Peace Research Institute (SIPRI), global investment in autonomous weapon systems exceeded $12 billion in 2024, a 35% increase over the previous year.
A concrete example is the Lavender system used by the Israeli army, which analyzes massive amounts of data to identify potential targets. According to military sources, it can flag up to 100 targets per day, significantly more than traditional methods allow. The Gospel system produces automated recommendations on strategic objectives, while in Ukraine the company Palantir provides data analysis tools for the rapid identification of enemy targets.
OpenAI recently removed the ban on military use of its models and now works with Anduril to provide intelligent anti-drone defense systems. Meta has likewise made its Llama model available for national security applications. As we analyzed in our study of algorithmic bias, these systems inevitably inherit the imperfections of the data on which they are trained.
Responsibility in the era of deadly machines
This prospect raises a series of ethical, legal, and practical questions that we cannot ignore. At the center of the debate is the question of responsibility. Who will be liable if an autonomous weapon makes an error, causing collateral damage or striking innocent civilians? The programmer? The military commander who deployed the system? The artificial intelligence itself?
Currently, international humanitarian law rests on the principle of human responsibility for attack decisions. Transferring that decision to a machine undermines the very foundations of this system. The European Parliament has pointed out that AI-enabled systems should allow human beings to exercise meaningful control while retaining responsibility for their use.
The complexity of artificial intelligence algorithms makes it difficult to predict their behavior with certainty in every situation. Entrusting a machine with the power to kill means taking a leap in the dark, with potentially catastrophic consequences. As discussed in our article on the ethics of artificial intelligence, the question of human control becomes ever more pressing.
Algorithmic bias and wartime discrimination
A particularly troubling aspect concerns the biases in the data on which these artificial intelligences are trained. If the data reflect the inequalities and discrimination of our society, there is a serious risk that autonomous weapons will inherit and amplify those biases.
Imagine a facial recognition system that works less well for certain ethnic groups, or a threat identification algorithm that associates particular demographic characteristics with a higher level of danger. The risk of algorithmic discrimination in wartime contexts is very real, and deeply disturbing.
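To see how such a bias would show up in practice, here is a minimal sketch, with entirely invented data, of the kind of audit that can reveal it: computing the false positive rate of a threat classifier separately for each demographic group. The group labels, records, and function name are hypothetical, chosen only for illustration.

```python
from collections import defaultdict

# Hypothetical audit records: (group, predicted_threat, actual_threat).
# The data are invented purely to illustrate the metric.
records = [
    ("group_a", True,  False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", True,  True),
    ("group_b", True,  False), ("group_b", True,  False),
    ("group_b", True,  False), ("group_b", True,  True),
]

def false_positive_rate_by_group(records):
    """False positive rate = wrongly flagged / all genuinely harmless,
    computed separately for each demographic group."""
    false_pos = defaultdict(int)  # harmless cases flagged as threats
    negatives = defaultdict(int)  # all genuinely harmless cases
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                false_pos[group] += 1
    return {g: false_pos[g] / negatives[g] for g in negatives}

print(false_positive_rate_by_group(records))
# -> {'group_a': 0.3333333333333333, 'group_b': 1.0}
# The same classifier is three times more likely to wrongly flag
# members of group_b as threats in this invented sample.
```

A gap like this might be tolerable in a photo-tagging app; it becomes lethal when the same score decides who is treated as a combatant.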
The recent UN condemnation of Israel's use of AI in the Gaza Strip highlights these risks. The more than 15,000 civilian casualties in the first six weeks after October 7, 2023, when artificial intelligence systems were widely used for target selection, raise fundamental questions about the accuracy and ethics of these systems.
As we explored in our article on surveillance and artificial intelligence, algorithmic control can easily turn into systematic oppression.
Key points to remember
- Autonomous weapons can decide on their own whom to attack, without direct human intervention, undermining the principle of human responsibility
- The AI arms race has already begun, with heavy investment by the world's superpowers
- Algorithmic bias can cause lethal discrimination by amplifying human prejudices in wartime contexts
- Meaningful human control remains essential to comply with international humanitarian law and to prevent abuse
Frequently asked questions
Are autonomous weapons already operational today? Semi-autonomous systems are already used in various conflicts, but fully autonomous weapons remain in an advanced stage of development. The border between automation and human control is thinning quickly.
Is there any international regulation of autonomous weapons? Currently, the UN Convention on Certain Conventional Weapons contains only limited references to them. Several organizations, such as Stop Killer Robots, are pressing for a full ban on lethal autonomous weapons.
How can human control over lethal decisions be guaranteed? The European Parliament requires that AI allow meaningful human control, but defining "meaningful" in wartime contexts remains an open challenge.
What are the main risks of autonomous weapons? The loss of human control, uncontrolled escalation of conflicts, algorithmic discrimination, and violations of international humanitarian law.
Towards a responsible future
The debate on autonomous weapons is anything but academic. The logic of deterrence and competition risks prevailing over prudence and ethical reflection. We must prevent technological innovation from dragging us into an uncontrollable spiral in which decisions about war and peace are delegated to machines devoid of conscience and empathy.
It is crucial to promote an open and inclusive international dialogue, involving governments, scientists, ethics experts, civil society organizations, and the general public. As pointed out in our article on bioethics and artificial intelligence, we must define clear and binding limits on the development and use of autonomous weapons before it is too late.
This is not about halting technological progress, but about steering it responsibly and consciously. Artificial intelligence has the potential to bring extraordinary benefits in many fields, but its application to weapons demands particularly serious and thorough reflection. The stakes are too high for us to remain inert. The future of war, and perhaps of humanity itself, depends on the choices we make today.