The Army rolled out its ATLAS targeting AI so clumsily that it blindsided the Pentagon’s own Joint Artificial Intelligence Center and inspired headlines about “AI-powered killing machines.” What went wrong? The answer lies in an ugly mix of misperceptions — fueled by the Army’s own longstanding struggles with the English language — and some very real loopholes in the Pentagon’s policy on lethal AI.
“The US Defense Department policy on autonomy in weapons doesn’t say that the DoD has to keep the human in the loop,” Army Ranger turned technologist Paul Scharre told me. “It doesn’t say that. That’s a common misconception.”
ATLAS came to public attention in about the worst way possible: an unheralded announcement on a federal contracting website, an indigestible bolus of buzzwords that meant one thing to insiders but something very different to everyone else — not just the general public but even civilian experts in AI.
Read the full article and more in Breaking Defense.