November 14, 2017

Lethal Autonomous Weapons and Policy-Making Amid Disruptive Technological Change

This week, countries are meeting at the United Nations to discuss lethal autonomous weapons and the line between human and machine decision-making. One complicating factor in these discussions is the rapid pace at which automation and artificial intelligence are advancing. Nations face a major challenge in setting policy on an emerging technology where the art of the possible is always changing.

When nations last met 18 months ago, DeepMind’s AlphaGo program had recently defeated one of the world’s top players at the Chinese strategy game Go. AlphaGo reached superhuman levels of play by training on 30 million moves from human games, then playing against itself to improve even further. Even that approach is already dated. Just last month, DeepMind unveiled a new version, AlphaGo Zero, that taught itself to play Go without any human game data at all, using only the board and the rules of the game. After only three days of self-play, it defeated the 2016 version of AlphaGo 100 games to zero. This rapid pace of progress means nations face tremendous uncertainty about what might be possible with artificial intelligence even a few years into the future.

How should policymakers deal with this uncertainty?

Read the full commentary in Just Security.
