July 09, 2014

Autonomy, “Killer Robots,” and Human Control in the Use of Force – Part I

By Paul Scharre

In May of this year, the United Nations Convention on Certain Conventional Weapons (CCW) held the first multilateral discussions on autonomous weapons or, as activists like to colorfully refer to them, “killer robots.” Discussion was robust, serious, and thoughtful, but through it all ran a strong sense of confusion about what exactly participants were talking about.

There are no internationally agreed-upon definitions for what an autonomous weapon is, and unfortunately the term “autonomy” itself often leads to confusion. Even setting aside the idea of weapons for a moment, the term “autonomous robot” alone conjures up wildly different images, ranging from a household Roomba to the sci-fi Terminator. It’s hard to have a meaningful discussion when participants may be using the same terminology to refer to such different things. Further complicating matters, some elements of autonomy are used in many weapons today, from homing torpedoes that have been in existence since World War II to missile defense systems that protect military installations and civilian populations, like Israel’s Iron Dome. A significant amount of the discussion taking place on autonomous weapons, however, both at CCW and in other forums, often occurs without a sufficient understanding of how – and why – militaries already use autonomy in existing weapons.

In the interests of helping to clarify the discussion, I want to offer some thoughts on how we use the word “autonomy” and on how autonomy is used in weapons today.

In particular, two overarching themes run through much of the commentary on the issue of autonomy in weapons. The first is the notion that what we are concerned with is not weapons today, but rather potential future weapons. The second is the idea, championed by some activists, that the goal should be “meaningful human control” over decisions about the use of force. Unfortunately, some of the concepts put forward as “minimum necessary standards for meaningful control” assume a level of human control far greater than exists with present-day weapons, such as homing munitions, that are widely used by every major military today. Setting the bar for minimum acceptable human control so high that vast swathes of existing weapons, to which no one presently objects, fail to meet it almost certainly misses the essence of what is new about autonomous weapons. Increased autonomy in future weapons raises challenging issues, and a critical first step is understanding what one could envision in future weapons that would result in a qualitatively different level of human control compared to today.

In the interests of readability, I’ll cover these issues in two posts: this first one will examine autonomy in existing weapons, and a second will explore some implications for the debate on autonomous weapons, in particular the notion of “meaningful human control.” I hope that by explaining how autonomy is used in weapons today, and how it is not used, these posts can serve as a useful launching point for discussions among policymakers, academics, and activists alike as they grapple with the issue of autonomy and human control in weapons.

Read the full piece at Just Security. 

Read Autonomy, “Killer Robots,” and Human Control in the Use of Force – Part II here. 
