January 25, 2023
How ‘Killer Robots’ Can Help Us Learn from Mistakes Made in AI Policies
The use of lethal robots for law enforcement has turned from a science fiction concept into news, thanks to recent high-profile debates in San Francisco and Oakland, Calif., as well as their actual use in Dallas. The San Francisco Board of Supervisors voted 8-3 to grant police the ability to use ground-based robots for lethal force “when risk of loss of life to members of the public or officers is imminent and officers cannot subdue the threat after using alternative force options or other de-escalation tactics.” Following immediate public outcry, the board reversed course a week later and unanimously voted to ban the lethal use of robots. Oakland underwent a less public but similar process, and in January the Dallas Police Department used a robot to end a standoff.
While recent events may have sparked a public outcry over the dangers of “killer robots,” we should not lose sight of the danger that poor processes create when deploying AI systems.
All of these events illustrate major pitfalls in the way that police currently use or plan to use lethal robots. Processes are rushed or nonexistent, conducted haphazardly, exclude the public and civil society, and fail to create adequate oversight. These problems must be fixed in future processes that authorize artificial intelligence (AI) use in order to avoid controversy, collateral damage and even international destabilization.
The chief sin that a process can commit is to move too quickly. Decisions about how to use AI systems require careful deliberation and informed discussion, especially with something as high-stakes as the use of lethal force. A counterexample here is the Department of Defense (DOD) Directive 3000.09, which covers the development and deployment of autonomous weapon systems. Because it lacks clarity on newer technology and terminology, this decade-old policy is in the process of a lengthy, but deliberate, update.
Read the full article from The Hill.