When we think of killer robots, images from the Terminator, Robocop, and other dystopian movies often spring to mind. These movies usually don't end well (for the humans, at least). So it seems crazy that we would even consider building machines programmed to kill. On the other hand, some argue that autonomous weapons could save lives on the battlefield. We are not yet living in a world of killer robots, but we might be getting close. What goes into the decision to kill? How could we possibly program robots to make the right decisions, given the moral stakes?
Listen to Paul Scharre's full conversation on the Raw Data Podcast: