July 29, 2016

Who’ll want artificially intelligent weapons? ISIS, democracies, or autocracies?

By Michael Horowitz

One of the biggest fears about the nexus of artificial intelligence and the military is that machine learning—a type of artificial intelligence that allows computers to learn from new data without being explicitly programmed—could spread rapidly through military systems and even spark an arms race. Alarm over the consequences of armed robots extended even to the Dallas Police Department's July 2016 use of a remotely piloted, rather than autonomous, bomb disposal robot retrofitted with an explosive. That event triggered a wave of articles about robots bearing weapons, especially when used outside the military.

Discussions of the military applications of robotics have tended to focus on the United States, largely because of America’s extensive use of uninhabited (also called unmanned) aerial vehicles, or “drones,” to conduct surveillance and launch lethal strikes against suspected militants around the world. Yet limiting the discussion of military robotics to systems developed by wealthy, democratic countries such as the United States may miss important underlying trends.

Read the full article on the Bulletin of the Atomic Scientists.
