CENTER FOR A NEW AMERICAN SECURITY: How do you stop a Terminator scenario before it starts? Real US robots won’t take over like the fictional Skynet, Pentagon officials promise, because a human being will always be “in the loop,” possessing the final say on whether or not to use lethal force.
But by the time the decision comes before that human operator, it’s probably too late, warns Richard Danzig. In a new report, the respected ex-Navy Secretary argues that we need to design in safeguards from the start.
“In the design of the machine, we ought to recognize we can’t rely on the human as much as we’d like to think,” Danzig said. Instead, we need to design the machine from the start with an eye on what could go wrong.
Danzig’s model is the extensive set of protections put in place for nuclear weapons, from physical controls to arms control agreements. But nukes were actually easy to corral compared to emerging technologies like Artificial Intelligence and genetic editing, he told me in an interview. Nukes can’t think for themselves, make copies of themselves, or change their own controlling code.
What’s more, nuclear weapons are purely weapons: you don’t use a nuke to decide whether you should use your nukes. Computers, by contrast, have become essential to how we gather and use information.
Read the Full Article at Breaking Defense