The Harop, a kamikaze drone, bolts from its launcher like a horse out of the gates. But it is not built for speed, nor for a jockey. Instead it just loiters, unsupervised, too high for those on the battlefield below to hear the thin old-fashioned whine of its propeller, waiting for its chance.
If the Harop is left alone, it will eventually fly back to a pre-assigned airbase, land itself and wait for its next job. Should an air-defence radar lock on to it with malicious intent, though, the drone will follow the radar signal to its source and the warhead nestled in its bulbous nose will blow the drone, the radar and any radar operators in the vicinity to kingdom come.
Israel Aerospace Industries (IAI) has been selling the Harop for more than a decade. A number of countries have bought the drone, including India and Germany. They do not have to use it in its autonomous radar-sniffing mode—it can be remotely piloted and used against any target picked up by its cameras that the operators see fit to attack. This is probably the mode in which it was used by Azerbaijan during its conflict with Armenia in Nagorno-Karabakh in 2016. But the Harops that Israel has used against air-defence systems in Syria may have been free to do their own thing.
In 2017, according to a report by the Stockholm International Peace Research Institute (SIPRI), a think-tank, the Harop was one of 49 deployed systems which could detect possible targets and attack them without human intervention. It is thus very much the sort of thing which disturbs the coalition of 89 non-governmental organisations (NGOs) in 50 countries that has come together under the banner of the “Campaign to Stop Killer Robots”. The campaign’s name is an impressive bit of anti-branding; what well-adjusted non-teenager would not want to stop killer robots? The term chillingly combines two of the great and fearful tropes of science fiction: the peculiarly powerful weapon and the non-human intelligence.
Read the full article and more in The Economist.