The European Responsibility to Act Against Killer Robots

  • Simone Neads
  • 12.4.2019 18:00

The use of artificial intelligence and autonomous robots in warfare always brings to mind a picture of the Terminator: a frightening, out-of-control robot killing humans to achieve its objective. While this image is far from reality, the use of AI and autonomous robots in war is a real concern. The UN Convention on Certain Conventional Weapons met last week to discuss whether killer robots should be banned, but the participating countries could not agree. Functional lethal autonomous robots are just around the corner, and the European Union might be the only institution that could stop them.

At the UN level, action against lethal autonomous robots has been stalled by a few key countries: Russia and the US, as well as Israel, Australia, and the UK. The UK argues that there should be freedom to conduct research and that a complete ban now would stop many life-saving technologies from being created. The UK recently invested £2.5 million in developing drone swarms controlled by AI. While the UK says it has no intention of giving these bots an autonomous killing capacity, some experts warn that the switch to full autonomy would be very easy to make.

The United States and Russia are the largest opponents of any deal. The two countries seem to be in an arms race to develop effective battle-ready AI. Putin was famously quoted saying that the country that leads in AI will become the ruler of the world, and many military strategists appear to share this consequential view of AI. Military technological development is a large part of US strategy, and the US understandably does not want to be left behind in a new military revolution.

On the other side of the argument is an impressive list of civil society and technology experts. Mary Wareham of Human Rights Watch is the global coordinator of the Campaign to Stop Killer Robots. Wareham, who previously campaigned successfully against landmines and cluster munitions, sees the development of weapons that can kill without a human command as a substantial moral and legal problem. She argues that if countries could go to war without losing any of their own soldiers' lives, the threshold for war would be lowered. There is also no clear answer as to who would be held accountable for a robot's moral decisions. António Guterres, the current Secretary-General of the UN, agrees, saying "Machines with the discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law".

The tech industry is also speaking up against AI in warfare. Three thousand Google employees signed a protest letter pushing the company not to renew its military AI contract, Project Maven. Even top AI experts such as Geoffrey Hinton and Yoshua Bengio, who won the Turing Award for their work on AI, support legal action against killer robots. Their concern is that the technology is deeply flawed and can make critical mistakes, something we would not want in a lethal system. They warn this is not an issue for the future, because the necessary technology already exists.

With the US and Russia blocking any legal framework through the UN CCW, perhaps the EU is the only institution that could take the lead and create strong legislation against lethal autonomous weapons. Even if such legislation would not bind countries like the US and China, which are still developing the technology, it would be a first step in changing the conversation and the norms around it. International pressure was the only way that other problematic weapon technologies, such as landmines, were finally outlawed.

Europe currently stands at a crossroads: be proactive or be left behind. The EU can wait while the great powers develop military technology amid moral and legal uncertainty, leaving it no choice but to react once the lethal technology is in use. Or Europe could, for once, lead in creating the definitions and frameworks that shape the future. This scenario, however, seems rather unlikely, since the EU has proved to have a hard time reaching consensus and taking action with global impact.

