Building a lethal autonomous weapon is easier than building a self-driving car. A new treaty is necessary.
Since 2014, the High Contracting Parties of the Convention on Certain Conventional Weapons (CCW) have held meetings at the United Nations in Geneva to discuss possible limitations on the development and deployment of lethal autonomous weapons systems. In November 2017, the CCW convened a formal Group of Governmental Experts (GGE), chaired by Ambassador Amandeep Singh Gill of India, with a mandate to “assess questions related to emerging technologies in the area of lethal autonomous weapons systems.” This article reflects views shared by a great many in the artificial intelligence community. These views were expressed in an open letter of July 28, 2015, signed by over 3,700 AI researchers, and in a letter to the Obama administration written on April 4, 2016, by 41 leading American AI researchers, including almost all of the living presidents of AAAI (the Association for the Advancement of Artificial Intelligence), the main professional society for the field. The British AI community sent a similar letter to then-Prime Minister David Cameron.
The UN defines autonomous weapons as having the capacity to “locate, select and eliminate human targets without human intervention.” Some have proposed alternative definitions – for example, the UK Ministry of Defence says that autonomous weapons systems must “understand higher-level intent and direction” and asserts that such systems “are not yet in existence and are not likely to be for many years, if at all.”
Much of the discussion at the UN has been stymied by claims that autonomy is a mysterious, indefinable property. In the view of the AI community, the notion of autonomy is essentially unproblematic in the context of lethal weapons – a context quite distinct from the philosophical one of human autonomy. The autonomy of a lethal weapon is no more mysterious than the autonomy of a chess program that decides where to move its pieces and which enemy pieces to eliminate. The key point is that specific targets are not identified and approved by human judgment, either in advance or at the moment of detection; instead, they are selected by an algorithm on the basis of sensory input received after a human initiates the mission.
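To make the distinction concrete, consider the following minimal sketch (in Python, with purely hypothetical names – no real system is implied). The only structural difference between a human-supervised weapon and an autonomous one is whose judgment gates the decision to engage:

```python
# Purely illustrative sketch; every name here is hypothetical.
# The definitional question is simply where human judgment enters the loop.

def supervised_mission(sensors, operator, weapon):
    """A human approves or rejects each specific target."""
    for candidate in sensors.detect_candidates():
        if operator.approves(candidate):   # human judgment applied per target
            weapon.engage(candidate)

def autonomous_mission(sensors, criteria, weapon):
    """An algorithm alone matches targets against pre-set criteria,
    using sensory input received only after a human launches the mission."""
    for candidate in sensors.detect_candidates():
        if criteria.matches(candidate):    # no human sees the specific target
            weapon.engage(candidate)
```

The difference is a single line of control flow, not a metaphysical mystery.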
The feasibility of autonomous weapons is also not in question, at least for a broad class of missions that might currently be contemplated. All of the component technologies – flight control, swarming, navigation, indoor and outdoor exploration and mapping, obstacle avoidance, detecting and tracking humans, tactical planning, coordinated attack – have been demonstrated. Building a lethal autonomous weapon, perhaps in the form of a multi-rotor micro-unmanned aerial vehicle, is easier than building a self-driving car, since the latter is held to a far higher performance standard and must operate without error in a vast range of complex situations. This is not “science fiction.” Autonomous weapons do not have to be humanoid, conscious and evil. And the capabilities are not “decades away” as claimed by some countries.
UN Special Rapporteur Christof Heyns, Human Rights Watch, the International Committee of the Red Cross and other experts have expressed concerns about the ability of autonomous weapons to comply with provisions of international humanitarian law regarding military necessity, proportionality and discrimination between combatants and civilians. Discrimination is probably feasible in most situations, even if not perfectly accurate. Determining proportionality and necessity, however, is most likely beyond current AI systems; a human operator would have to establish these in advance, with reasonable certainty, for every attack the weapon might undertake during a mission. This requirement would sharply limit the scope of missions that could legally be initiated.
Another important component of international humanitarian law is the Martens Clause, according to which “the human person remains under the protection of the principles of humanity and the dictates of public conscience.” In this regard, Germany has stated that it “will not accept that the decision over life and death is taken solely by an autonomous system” while Japan “has no plan to develop robots with humans out of the loop, which may be capable of committing murder.” BAE Systems, the world’s second-largest defense contractor, has asserted that it has no intention of developing autonomous weapons, stating that the removal of the human from the loop is “fundamentally wrong.”
At present, the broader public has little awareness of the state of the technology and the near-term possibilities, but this will presumably change if the killing of humans by autonomous robots becomes commonplace. At that point, the dictates of public conscience will be very clear, but it may be too late to follow them.
Compliance with international humanitarian law, even if achievable, is not sufficient to justify proceeding with an arms race involving lethal autonomous weapons. As President Obama put it:
“I recognize that the potential development of lethal autonomous weapons raises questions that compliance with existing legal norms – if that can be achieved – may not by itself resolve, and that we will need to grapple with more fundamental moral questions about whether and to what extent computer algorithms should be able to take a human life.”
One of the “fundamental moral questions” is the effect of autonomous weapons systems on the security of member states and their peoples. On this matter, the message of the AI community, as expressed in the letters mentioned above, has been clear: Because they do not require individual human supervision, autonomous weapons are potentially scalable weapons of mass destruction; an essentially unlimited number of such weapons can be launched by a small number of people. This is an inescapable logical consequence of autonomy. As a result, we expect that autonomous weapons will reduce human security at the individual, local, national and international levels.
It is estimated, for example, that roughly one million lethal weapons can be carried in a single container truck or cargo aircraft, operated by perhaps two or three people rather than two or three million. Such weapons would be able to hunt for and eliminate humans in towns and cities, even inside buildings. They would be cheap, effective, unattributable and easily proliferated once the major powers initiate mass production and the weapons become available on the international arms market. For the victor they would have advantages over nuclear weapons or carpet bombing: they leave property intact and can be applied selectively to eliminate only those who might threaten an occupying force. Finally, whereas the use of nuclear weapons represents a cataclysmic threshold we have – often by sheer luck – avoided crossing since 1945, there is no such threshold with scalable autonomous weapons. Attacks could escalate smoothly from 100 casualties to 1,000 to 10,000 to 100,000.
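A back-of-envelope calculation illustrates the order of magnitude; the container and weapon volumes below are illustrative assumptions, not sourced specifications:

```python
# Rough volumetric check of the "one million weapons per container" claim.
# Both figures are illustrative assumptions.
container_volume_m3 = 67.0   # interior of a standard 40 ft shipping container
weapon_volume_cm3 = 65.0     # a palm-sized micro-UAV, densely packed
count = container_volume_m3 * 1_000_000 / weapon_volume_cm3  # 1 m^3 = 10^6 cm^3
print(f"approximately {count:,.0f} weapons per container")   # ~1,030,769
```

The point is not the precise figure but the scaling: payload capacity, not human attention, becomes the limiting factor.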
These considerations apply principally to weapons designed for ground warfare and anti-personnel operations, and are less relevant for naval and aerial combat. It is still the case, however, that “to entrust a significant portion of a nation’s defense capability in any sphere to autonomous systems is to court instability and risk strategic surprise.” Autonomous weapons fighting other autonomous weapons must adapt their behavior quickly, since predictable behavior invites defeat; this necessary adaptability makes them intrinsically unpredictable and thus difficult to control. Moreover, the strategic balance between robot-armed countries could change overnight as a result of a software update or a cybersecurity breach. Indeed, a nation’s autonomous weapons might be turned against its own civilian population; with no external adversary or individual to whom the attack could be attributed, one can imagine that the nation’s government would be rather less popular after such an event. Finally, the possibility of an accidental war – a military “flash crash” involving spiraling and unpredictable high-speed interactions among competing algorithms – cannot be discounted.
It seems likely that pursuing an arms race in lethal autonomous weapons would result in a drastic and probably irreversible reduction in international, national, communal and personal security. The only viable alternative is a treaty that limits the development, deployment and use of such weapons and prevents the large-scale manufacturing that would result in wide dissemination of these scalable weapons.
This argument parallels the one that leading biologists used to persuade US Presidents Lyndon Johnson and Richard Nixon to renounce America’s biological weapons program – a decision that led in turn to the United Kingdom’s drafting of the Biological Weapons Convention and its subsequent adoption. I think we can all be glad that those steps were taken.
STUART RUSSELL
is a professor of computer science and the Smith-Zadeh Professor in Engineering at the University of California, Berkeley.