When we think of killer robots, images of the Terminator, Robocop, and other dystopian movies often spring to mind. These movies usually don't end well (for the humans, at least). So it seems crazy that we would even consider building machines programmed to kill. On the other hand, some argue that autonomous weapons could save lives on the battlefield. We are not yet living in a world of killer robots, but we might be getting close. What goes into the decision to kill? And how could we possibly program robots to make the right decisions, given the moral stakes?
Listen to Paul Scharre's full conversation on the Raw Data Podcast: