April 11, 2016

Statement at United Nations CCW Expert Meeting on Lethal Autonomous Weapon Systems

By Kelley Sayler and Paul Scharre

As we enter our third year of discussions on lethal autonomous weapons, we would like to applaud states for their continued engagement on this important topic, as well as to offer some thoughts for consideration.

An emerging view holds that, in addition to looking at technology, we ought to focus on the role of the human in lethal engagements. We share this perspective. Technology changes; the field of machine intelligence alone has seen rapid progress in the past few years. Arguments for banning weapons based on the state of technology today therefore make little sense. What if the technology improves? Could some of the same sensors and autonomy that allow a self-driving car to avoid pedestrians be used to avoid civilian casualties in war? Perhaps. We should not preemptively close ourselves off to opportunities to decrease human suffering in war.

Humans’ obligations under international humanitarian law (IHL) do not change, however. Humans are bound to ensure that their actions are lawful. This imposes certain criteria on the extent to which humans must remain engaged in lethal decision-making.

Humans must have sufficient information about the weapon, the targets, and the specific context for action, as well as sufficient time to make an informed decision that engagements are lawful. In addition, because weapons are tools in the hands of humans and not combatants themselves, humans must retain the ability to determine when those tools are no longer appropriate and engagements should cease. This does not mean real-time supervision of weapons or perfect information; neither exists today for many weapons, such as cruise missiles. It does mean that autonomy must be bounded so that failures, if they occur, do not lead to catastrophic consequences.

There is much work to be done to better understand the necessary role of human control and judgment in lethal force. Whether one uses the term “meaningful,” “appropriate,” or some other adjective, it is clear that continued human involvement in lethal force is essential. Rather than focus solely on what technology can and cannot do today, we ought to ask, if we had all the technology we could imagine, what role would we still want humans to play in lethal decision-making? What decisions require uniquely human judgment, and why?

Thanks very much for the opportunity to share these views.
