The global competition to develop fully autonomous weapons systems guided by artificial intelligence risks developing into a full-blown arms race, according to a new report from a Dutch peace group.
Lethal autonomous weapons, or “killer robots,” as they are described by Pax, the anti-war NGO behind the report, are designed to select and engage targets without proximate human control. Their advent has been called the “third revolution in warfare” by AI experts—a successor to the invention of gunpowder and the creation of nuclear bombs.
Seven countries are known to be developing lethal autonomous weapons: the US, China, Russia, the UK, France, Israel, and South Korea. Current US military policy requires some level of human involvement in the decision to fire. Among the other countries, positions on a ban vary; China, notably, supports a ban on the use of fully autonomous lethal weapons but not on their development.
“Lethal autonomous weapons raise many legal, ethical and security concerns,” the Pax report says. “It would be deeply unethical to delegate the decision over life and death to a machine or algorithms.” Machines acting on their own are “unlikely to comply” with the laws of war or to reliably distinguish between civilians and combatants. Pax also foresees an “accountability vacuum” when such weapons commit improper or illegal acts.
Read the full article and more in Quartz.