April 25, 2019
Paul Scharre Interviewed on the Raw Data Podcast
When we think of killer robots, images of the Terminator, Robocop, and other dystopian movies often spring to mind. These movies usually don't end well (for the humans, at least). So it seems crazy that we would even consider building machines programmed to kill. On the other hand, some argue that autonomous weapons could save lives on the battlefield. We are not yet living in a world of killer robots, but we might be getting close. What goes into the decision to kill? How can we possibly program robots to make the right decisions, given the moral stakes?
Listen to Paul Scharre's full conversation on the Raw Data Podcast.