April 15, 2015

Statement to the UN Convention on Certain Conventional Weapons on Meaningful Human Control

By Kelley Sayler

The concept of “meaningful human control” has been repeatedly raised during this week’s discussions. Many countries have said they support meaningful human control, while others have said they would like to explore the idea further. Still others have said that the concept of “meaningful human control” is vague and unhelpful.

As part of our ongoing research on autonomous weapons at the Center for a New American Security, we have examined the concept of meaningful human control, and while we did not originate this concept, we would like to offer some ideas for consideration, particularly with regard to how control is exercised in weapons today.

If the international community is concerned about a future in which autonomy might cause us to lose human control – whether the nature of that control is considered to be meaningful, effective, adequate, etc. – we thought it would be useful to understand what might differ between the weapons of today and future lethal autonomous weapons.

We find that meaningful human control in weapons today has three essential components:

First, human operators make informed, conscious decisions about the use of weapons.

Second, human operators have sufficient information to ensure the lawfulness of a particular action, given what they know about the target, the weapon, and the action’s context.

Third, the weapon is designed and tested in a realistic operating environment, and human operators are properly trained to ensure effective control over the use of the weapon.

These standards of meaningful human control help to ensure that human operators and commanders are making conscious decisions about the use of force, and that they have enough information when making those decisions to remain both legally and morally accountable for their actions. Furthermore, appropriate design and testing of a weapon system, along with proper training for human operators, helps to ensure that weapons will be controllable and that they will not pose unacceptable risks.

These are important concepts for understanding how human control is currently exercised in weapons that are not autonomous. However, this is not to say whether the concept of meaningful human control is useful or necessary, or whether the concept should ultimately be adopted by the CCW. Throughout the course of this week, states will need to consider the appropriate framework for thinking about human control over autonomous weapon systems.

We hope that these points are useful for states as they continue to think about the nature of human control over weapon systems. More information about CNAS research on this topic can be found in our reports on autonomous weapons and meaningful human control, which are available on our website at http://www.cnas.org/ethicalautonomy.
