January 25, 2023
How ‘Killer Robots’ Can Help Us Learn from Mistakes Made in AI Policies
The use of lethal robots for law enforcement has turned from a science fiction concept into news snippets, thanks to recent high-profile debates in San Francisco and Oakland, Calif., as well as their actual use in Dallas. The San Francisco Board of Supervisors voted 8-3 to grant police the ability to use ground-based robots for lethal force when “risk of loss of life to members of the public or officers is imminent and officers cannot subdue the threat after using alternative force options or other de-escalation tactics.” Following immediate public outcry, the board reversed course a week later and unanimously voted to ban the lethal use of robots. Oakland underwent a less public but similar process, and in January the Dallas Police Department used a robot to end a standoff.
While recent events may have sparked a public outcry over the dangers of “killer robots,” we should not lose sight of the danger that poor processes create when deploying AI systems.
All of these events illustrate major pitfalls in the way that police currently use or plan to use lethal robots. Processes are rushed or nonexistent, conducted haphazardly, exclude the public and civil society, and fail to create adequate oversight. These problems must be fixed in future processes that authorize artificial intelligence (AI) use in order to avoid controversy, collateral damage and even international destabilization.
The chief sin that a process can commit is to move too quickly. Decisions about how to use AI systems require careful deliberation and informed discussion, especially with something as high-stakes as the use of lethal force. A counterexample here is the Department of Defense (DOD) Directive 3000.09, which covers the development and deployment of lethal autonomous systems. Because it lacks clarity on new technology and terminology, this decade-old policy is in the process of a lengthy, but deliberate, update.
Read the full article from The Hill.
More from CNAS
Two Books Warn About the Privacy Implications of AI and Neurotechnology
Today's episode is all about tech. First, Paul Scharre of the Center for a New American Security speaks with NPR's Ari Shapiro about his new book, Four Battlegrounds: Power in...
By Paul Scharre
AI Arms Race, Drone Warfare and Cognitive Enhancement with Paul Scharre
The Grey Dynamics podcast spoke with Paul Scharre, the vice president and director of studies at the Center for a New American Security (CNAS). They discussed the use of drones ...
By Paul Scharre
China’s Chip Industry Dismayed by Multilateral Export Controls
The original Chinese statement takes a much more indignant tone, reading more like an impassioned call to action to the Chinese domestic semiconductor industry to get its act ...
By Emily Jin
China’s Censors Are Afraid of What Chatbots Might Say
If Xi grows worried that, for instance, AI-powered automation will displace too many jobs and thus metastasize the risk of social unrest, he would have to make a hard choice b...
By Jordan Schneider & Nicholas Welch