IN THE 1970 science fiction film “Colossus: The Forbin Project,” the United States decides to turn over control of its strategic arsenal to Colossus, a massive supercomputer. Big mistake. Almost immediately it becomes clear that, as its creator Dr. Charles Forbin says, “Colossus is built even better than we thought.” In fact, it’s a self-aware artificial intelligence — quickly discovering that the Soviets have also activated an almost identical system and joining up with it to take over the planet. Along the way, Colossus nukes a Russian oil complex and a U.S. missile base to enforce its control. Now, instead of two human superpowers threatening nuclear Armageddon, humanity’s continued survival is at the mercy (or mercy’s AI equivalent) of a supercomputer.
“The object in constructing me was to prevent war,” Colossus announces. “This object is attained. I will not permit war. It is wasteful and pointless. … Man is his own worst enemy. … I will restrain man.” To its cool machine reasoning, it’s all perfectly rational. But its definition of rationality differs tragically from that of human beings.
We’re in no danger of a Colossus taking over the planet, at least not yet. But the prospect of lethal autonomous weapons (AWs) under nonhuman control is all too real and immediate. As Paul Scharre points out in “Army of None: Autonomous Weapons and the Future of War,” we already have robots doing everything from cleaning the living room to driving cars to tracking down (and sometimes taking out) terrorists. The step from armed drones controlled remotely by humans to fully autonomous machines that can find, target, and kill all on their own is less a matter of technology than our own choice: Do we turn on Colossus or not?
Read the Full Review at Undark