When we think of killer robots, images of the Terminator, Robocop, and other dystopian movies often spring to mind. These movies usually don't end well (for the humans, at least). So it seems crazy that we would even consider building machines programmed to kill. On the other hand, some argue that autonomous weapons could save lives on the battlefield. We are not yet living in a world of killer robots, but we might be getting close. What goes into the decision to kill? How can we possibly program robots to make the right decisions, given the moral stakes?
Listen to Paul Scharre's full conversation on the Raw Data Podcast: