The concept of “meaningful human control” has been repeatedly raised during this week’s discussions. Many countries have said they support meaningful human control, while others have said they would like to explore the idea further. Still others have said that the concept of “meaningful human control” is vague and unhelpful.
As part of our ongoing research on autonomous weapons at the Center for a New American Security, we have examined the concept of meaningful human control, and while we did not originate this concept, we would like to offer some ideas for consideration, particularly with regard to how control is exercised in weapons today.
If the international community is concerned about a future in which autonomy might cause us to lose human control – whether the nature of that control is considered to be meaningful, effective, adequate, etc. – we thought it would be useful to understand what might differ between today's weapons and future lethal autonomous weapons.
We find that meaningful human control in weapons today has three essential components:
First, human operators make informed, conscious decisions about the use of weapons.
Second, human operators have sufficient information to ensure the lawfulness of a particular action, given what they know about the target, the weapon, and the action’s context.
Third, the weapon is designed and tested in a realistic operating environment, and human operators are properly trained to ensure effective control over the use of the weapon.
These standards of meaningful human control help to ensure that human operators and commanders are making conscious decisions about the use of force, and that they have enough information when making those decisions to remain both legally and morally accountable for their actions. Furthermore, appropriate design and testing of a weapon system, along with proper training for human operators, helps to ensure that weapons will be controllable and that they will not pose unacceptable risks.
These are important concepts for understanding how human control is currently exercised in weapons that are not autonomous. However, these observations alone do not settle whether the concept of meaningful human control is useful or necessary, or whether it should ultimately be adopted by the CCW. Throughout the course of this week, states will need to consider the appropriate framework for thinking about human control over autonomous weapon systems.
We hope that these points are useful for states as they continue to think about the nature of human control over weapon systems. More information about CNAS research on this topic can be found in our reports on autonomous weapons and meaningful human control, which are available on our website at http://www.cnas.org/ethicalautonomy.