July 09, 2014

Autonomy, “Killer Robots,” and Human Control in the Use of Force – Part II

By Paul Scharre

In a recent post, I covered how autonomy is currently used in weapons and what would be different about potential future autonomous weapons that select and engage targets on their own, without direct human involvement. In this post, I will cover some implications for the current debate on autonomous weapons, in particular the minimum standard of “meaningful human control” proposed by some activists.

If fully autonomous weapons are so valuable, then where are they?

As I mentioned in a previous post, autonomous weapons do not generally exist today, although the Israeli Harpy, a loitering air vehicle that searches for and destroys enemy radars, is a notable exception. It is worth examining why more simple autonomous weapons like the Harpy are not in use today. Curiously, unlike, say, active protection systems for ground vehicles, of which multiple variants have been built by many countries, the Harpy is the only wide-area loitering search-and-attack weapon in use today. This is particularly worth examining because of a common assumption in the debate: that more sophisticated autonomy approaching human-level cognition for targeting decisions raises challenging legal and moral issues, but that states are likely to employ simple autonomous weapons in relatively uncluttered environments. That may be the case, but if so, it raises the question of why states have not already done so. The underlying technology behind the Harpy is not particularly sophisticated. In fact, the ability to build a wide-area loitering anti-radiation weapon dates back several decades.

Read the full piece at Just Security.
