Robots at war: What if they start thinking for themselves?
All the major military powers have been preparing for this. Everybody knows that after gunpowder and the nuclear bomb, robots will be the next military revolution. But this revolution will take us further than any before it.
On May 30, special rapporteur Christof Heyns told the UN Human Rights Council in Geneva that the use of lethal autonomous robotics (LARs) “would entail not merely an upgrade of the kinds of weapons used, but also a change in the identity of those who use them.” In other words: “The distinction we make between weapons and warriors risks becoming blurred, as the former would take autonomous decisions about their own use.”
We are staring into an abyss of uncertainty. Firstly, from a legal standpoint: who would be held responsible if these robots failed to respect the laws of war? The commanding officer – assuming that he or she is human? The developer who wrote the robots’ software? The manufacturer? The robot itself? But most importantly, beyond the use of non-human fighters, the UN expert asks a frightening question: are we ready to live in a world where “machines are given the power to kill humans?”
Heyns urged the Human Rights Council to call on all countries “to declare and implement national moratoria on the production, assembly, transfer, acquisition, deployment and use of LARs, until a framework on the future of LARs has been established.” He invited the Council to convene an international panel to articulate this framework and reflect on the consequences of using LARs. LARs should not be confused with drones. Drones do not need a pilot on board to intervene in a war zone, but they are always controlled remotely by one or more soldiers. Every decision to open fire is made by a human. It is often taken after having been discussed with other people, who weigh the pros and cons and analyze a specific situation.
Confusing civilians with armed combatants
With LARs, this will not be the case. Military top brass are looking for ways to eliminate human intervention. Robots could be used anywhere, anytime, and could ideally bring casualties down to zero for the army that uses them. They can also be produced in large quantities, which makes them attractive to any army. However, there are many risks. One of them is that these robots might not be able to understand human body language. They will not grasp cultural nuances. They could mistake a civilian with his hands up for a combatant pulling out a gun.
The UN rapporteur is not the only one who is worried. A group of organisations led by Human Rights Watch is pushing for an outright ban. “It is possible to halt the slide toward full autonomy in weaponry before moral and legal boundaries are crossed, but only if we start to draw the line now,” said HRW’s arms director.
So is it possible to launch a campaign against weapons that do not yet exist? “This wouldn’t be the first time,” he says. “There was a successful campaign in 1995 to ban blinding lasers. Prototypes had been made, but military leaders around the world came to the conclusion that the world was better off without such weapons.” As they did in 1995, the NGOs are hoping to join forces with a core group of highly motivated states, and to collaborate with the International Committee of the Red Cross (ICRC), which they regard as a key player.
During their numerous meetings to win over diplomats in Geneva, NGOs draw attention to how much we still do not know about the use of these machines in complex war situations. Would a robot have any sense of “proportionality” when opening fire, in accordance with the Geneva Conventions?
One of Christof Heyns’s co-workers offers a dissenting voice in the debate: “The issue is complex. Some experts claim that sometimes, removing the human factor, and feelings like panic and the thirst for revenge, could actually be a good thing.” In a way, removing humans from the battlefield could make war more humane...
Le Temps, Worldcrunch / The Interview People