Nations from around the world met at the United Nations in Geneva, Switzerland, to discuss autonomous weapons: potential future weapons that would select and engage targets on their own. Ensuring “meaningful human control” over future weapons has been a topic of much debate, with some human rights activists advocating for a preemptive ban. Increasing autonomy in weapons raises the question of how much human involvement is required in lethal attacks.
In this brief, Scharre and Sayler explain how autonomy is already used in many weapons today and how future fully autonomous weapons would be different. Autonomous weapons would be programmed and launched by humans. Once launched, however, the weapon would have the freedom to select its own targets over a wide area according to preprogrammed parameters, raising new legal, ethical, and safety questions.
The report is available online.