June 18, 2013
Rosa's Dystopia: The Moral Downside of Coming Autonomous Weapons Systems
Last Wednesday, Tom posted about one of the more provocative statements made during CNAS's fantastic annual conference. FP's Rosa Brooks, while discussing the morality of drones, implied that future drones with artificial intelligence would make better judgments than humans about when to kill in war. And if that's the case, she asked, how can we morally justify not using these drones? Brooks may be correct that drones will one day be better at making judgments about when to kill, but the broader negative moral consequences of making AI drones a staple of our military far outweigh the benefits of better tactical decisions.
Drones with artificial intelligence (commonly referred to as autonomous weapons systems) do have the potential to make better decisions than humans on the battlefield, because those systems could approach purely rational decision-making. Some may argue that we can never build a machine sophisticated enough to make all of the necessary decisions in an environment as complex as combat. However, Brooks reminded us that a decade ago the same was said about a computer's ability to drive a car. Google's driverless cars now do exactly that, and by some measures they drive more safely than humans. Fellow panelist Ben FitzGerald agreed, saying that the technology for autonomous weapons systems will exist soon.
Such technology would bring real benefits. The most obvious is a massive decrease in casualties among U.S. forces, which would alleviate both the terrible human suffering associated with ground wars and some of the biggest long-term cost drivers of such conflicts. Autonomous weapons systems might also produce fewer civilian casualties, because they would make targeting decisions free of the emotional stresses of combat.
However, more autonomous weapons systems on the battlefield would mean fewer humans there, reducing the human costs of war and further insulating the public. The benefit of fewer casualties and reduced suffering is thus a double-edged sword: Some already argue that the American public is too sheltered from the costs and burdens of our current wars; imagine how little attention the public would pay to a war in which the only casualties were expensive erector sets that shoot. Ultimately, reducing the barriers to war makes war easier to choose. If war is easy to choose and the body politic doesn't care, there will be more wars.
Unfortunately, that isn't the only drawback. If we populated our military with autonomous weapons systems, our adversaries would adapt. States, and everyone else who fights these days, use war to impose a policy on an adversary through violence, and our enemies couldn't change our policy by reducing our autonomous weapons systems to a scrap heap on the battlefield. Instead, they would go asymmetric and target our noncombatants, because that would be the only way to make us truly hurt.
Our enemies already do this to some extent, but it's not their only option: we have people in uniform who have stood up and said, "me, not them." In a world where we fought only with autonomous weapons systems, targeting our civilians would be our enemies' only hope of success.
And we're vulnerable. In the age of cyberattacks and terrorism, we should pursue policies that further insulate our noncombatants rather than serve them up as the only viable targets our enemies can strike to impose real costs on American society. As someone who wears the uniform, I would welcome a world in which my friends and I did not have to place ourselves in harm's way to protect the nation. But my friends and I signed up so that our enemies would fight us instead of our families. And I worry that if humans don't fight our wars, we'll have more wars, and our families will be the enemy's primary targets.