As we have heard this week, artificial intelligence and autonomy are rapidly advancing. While no nation has said it will build autonomous weapons, the technology will make such systems possible and, indeed, already makes them possible today for simple missions.
What would be the consequences if we were to delegate targeting decisions to machines? Would it make war more precise and humane, saving lives? Or would it lead to more accidents and less human responsibility?
A major challenge in answering these questions is that the technology is constantly changing. Eighteen months ago, when this body last met in 2016, the AI research company DeepMind had just unveiled AlphaGo, a computer program that beat the top human player at Go. To accomplish that feat, DeepMind trained AlphaGo on 30 million human moves so that it could learn how to play the game.
Last month, DeepMind released a new version, AlphaGo Zero, that learned how to play Go on its own, without any human training data at all. Within a mere three days of self-play, it was good enough to defeat the 2016 version 100 games to zero.
With technology moving forward at this pace, what will be possible 10 or even 5 years from now?
If we agree to forswear some technology, we could end up giving up some uses of automation that could make war more humane. On the other hand, a headlong rush into a future of increasing autonomy, with no discussion of where it is taking us, is not in humanity’s interests either. We should control our destiny.
Instead, we should ask: “What role do we want humans to have in lethal decision-making in war?”
It is important to understand the technology, but to answer this question we need to focus on the human. The technology changes, but the human stays the same.
What decisions in war require uniquely human judgment? If we had all of the technology we could imagine, what decisions would we still want people to make in war, and why?
This concept has been formulated in many ways, and many states have expressed the importance of meaningful, appropriate, or necessary human judgment or control. The specific term is less important. What is important is that states continue to explore the meaning behind these terms in order to better understand the legal, moral, operational, and strategic rationale for human involvement in the use of force.
This perspective – focusing on the human – can be our guiding light for navigating our way through this period of technological change.
Paul Scharre (@paul_scharre) is a senior fellow at the Center for a New American Security and author of the forthcoming book Army of None: Autonomous Weapons and the Future of War, to be published in April 2018.
The remarks are available online.