UN Meeting of Experts on Lethal Autonomous Weapons Systems in Geneva

Author: Ruth O'Donnell

Last week Robin Geiss, Professor of International Law and Security at the University of Glasgow, addressed the UN Meeting of Experts on Lethal Autonomous Weapons Systems in Geneva (11–15 April 2016).

Autonomous weapons systems ("combat robots") are no longer a matter of science fiction. Various states are striving to develop such systems, and already today different weapons systems have autonomous modes. Autonomous weapons systems promise important strategic gains. Unlike human soldiers, they do not tire, they know neither anger nor fear, and they can process the growing volume of data and information that sophisticated military actors grapple with in modern armed conflicts far more quickly than human beings ever could.

At the same time these weapons systems raise fundamental ethical and legal questions. Could a robot ever be entitled to kill a human being? Who is to be held accountable if something goes wrong while a robot is deployed on a combat mission? Last week’s UN meeting aimed to deepen the understanding of these weapons systems with a view to (a possible) future international regulation.

Professor Geiss was invited to share his expertise in the session on "Challenges to International Humanitarian Law (IHL)", in particular with regard to the issue of accountability in cases where robots violate the rules of international humanitarian law. Traditional accountability models are typically premised on some form of control and/or foreseeability. Higher levels of autonomy in weapons systems, however, mean lower levels of control and foreseeability. Accordingly, the more autonomous a (weapons) system is, the more difficult it will be to establish accountability on the basis of traditional accountability models. This challenge exists with regard to civil uses of autonomous technology (e.g. self-driving cars) in the same way that it exists for military uses of autonomous systems.

This, however, does not mean that there is an inevitable or insurmountable "accountability gap". Especially in the area of state responsibility – the conceptual challenges are greater when focusing on individual criminal responsibility – accountability challenges can be overcome by way of regulation and clarification of existing laws. Professor Geiss said that "there is no conceptual barrier for holding a state (or individual human being) accountable for wrongful acts committed by a robot or for failures regarding risk minimization and harm prevention" and that "there is no need to devise a new legal category of 'e-persons' or 'virtual legal entities'".

A State that benefits from the various (strategic) gains associated with this new technology (i.e. a State that deploys a robot on a military mission) should be held responsible whenever the (unpredictable) risks inherent in this technology are realized. On the basis of this rationale, a State could be held responsible for failures regarding risk prevention and harm reduction at the pre-deployment stage, as well as for specific (wrongful) actions of the autonomous weapons system during deployment. Against this backdrop, Professor Geiss urged States to put more emphasis on the identification and specification of detailed (due diligence) obligations aimed at risk prevention and harm reduction. After all, prevention is better than cure. Common Article 1 of the four Geneva Conventions requires States to ensure respect for the laws of armed conflict in all circumstances. In other words, a legal basis for risk mitigation obligations already exists. What is needed now is a better understanding of what exactly the obligation to ensure respect for the laws of armed conflict means in relation to autonomous weapons systems.

At another event regarding autonomous weapons systems, which was held at NATO HQ in Brussels on 18 April, Professor Geiss emphasized that “approaches to international regulation of autonomous weapons systems should be based on the assumption that critical decisions, i.e. decisions that relate to the actual targeting process and that concern important legal interests such as the right to life and the right to physical integrity, should not, for legal and ethical reasons, be delegated to fully autonomous systems. Decisions over life and death must always be subject to the control of a human being.”

~ Robin Geiss
