In May of this year, the United Nations Convention on Certain Conventional Weapons (CCW) held the first multilateral discussions on autonomous weapons or, as activists like to colorfully call them, “killer robots.” The discussion was robust, serious, and thoughtful, but through it all ran a strong sense of confusion about what, exactly, participants were talking about.
There are no internationally agreed-upon definitions for what an autonomous weapon is, and unfortunately the term “autonomy” itself often leads to confusion. Even setting aside weapons for a moment, the phrase “autonomous robot” conjures up wildly different images, ranging from a household Roomba to the sci-fi Terminator. It’s hard to have a meaningful discussion when participants may be using the same terminology to refer to such different things. Further complicating matters, some elements of autonomy are used in many weapons today, from homing torpedoes, which have existed since World War II, to missile defense systems that protect military installations and civilian populations, like Israel’s Iron Dome. Much of the discussion on autonomous weapons, however, both at CCW and in other forums, occurs without a sufficient understanding of how – and why – militaries already use autonomy in existing weapons.
In the interest of helping to clarify the discussion, I want to offer some thoughts on how we use the word “autonomy” and on how autonomy is used in weapons today. In particular, two overarching themes run through much of the commentary on the issue of autonomy in weapons. The first is the notion that the concern is not with weapons that exist today, but rather with potential future weapons. The second is the idea, championed by some activists, that the goal should be “meaningful human control” over decisions about the use of force. Unfortunately, some of the concepts put forward as “minimum necessary standards for meaningful control” assume a level of human control far greater than exists with present-day weapons, such as homing munitions, that are widely used by every major military. Setting the bar for minimum acceptable human control so high that vast swathes of existing weapons, to which no one presently objects, fail to meet it almost certainly misses the essence of what is new about autonomous weapons. Increased autonomy in future weapons raises challenging issues, and a critical first step is understanding what features of future weapons could result in a qualitatively different level of human control than exists today.

In the interest of readability, I’ll cover these issues in two posts: this first one examines autonomy in existing weapons, and a second will explore some implications for the debate on autonomous weapons, in particular the notion of “meaningful human control.” By explaining how autonomy is used in weapons today, and how it is not, I hope to offer a useful launching point for policymakers, academics, and activists alike as they grapple with the issue of autonomy and human control in weapons.