January 25, 2023

How ‘Killer Robots’ Can Help Us Learn from Mistakes Made in AI Policies

The use of lethal robots for law enforcement has turned from a science fiction concept into news headlines, thanks to recent high-profile debates in San Francisco and Oakland, Calif., as well as their actual use in Dallas. The San Francisco Board of Supervisors voted 8-3 to grant police the ability to use ground-based robots for lethal force when “risk of loss of life to members of the public or officers is imminent and officers cannot subdue the threat after using alternative force options or other de-escalation tactics.” Following immediate public outcry, the board reversed course a week later and unanimously voted to ban the lethal use of robots. Oakland underwent a similar, if less public, process, and in 2016 the Dallas Police Department used a robot to end a standoff.

While recent events may have sparked a public outcry over the dangers of “killer robots,” we should not lose sight of the danger that poor processes create when deploying AI systems.

All of these events illustrate major pitfalls with the way that police currently use or plan to use lethal robots. Processes are rushed or nonexistent, conducted haphazardly, do not involve the public or civil society, and fail to create adequate oversight. These problems must be fixed in future processes that authorize artificial intelligence (AI) use in order to avoid controversy, collateral damage and even international destabilization.

The chief sin that a process can commit is to move too quickly. Decisions about how to use AI systems require careful deliberation and informed discussion, especially for something as high-stakes as the use of lethal force. A counterexample here is Department of Defense (DOD) Directive 3000.09, which governs the development and deployment of lethal autonomous weapon systems. Because the decade-old policy lacks clarity on new technology and terminology, it is in the process of a lengthy, but deliberate, update.

Read the full article in The Hill.
