June 22, 2018
Relax, Google, the Robot Army Isn’t Here Yet
People can differ on their perceptions of "evil." People can also change their minds. Still, it's hard to wrap one's head around how Google, famous for its "don't be evil" company motto, dealt with a small Defense Department contract involving artificial intelligence.
Facing a backlash from employees, including an open letter insisting the company "should not be in the business of war," Google in April grandly defended involvement in a project "intended to save lives and save people from having to do highly tedious work."
Less than two months later, chief executive officer Sundar Pichai announced that the contract would not be renewed, writing equally grandly that Google would shun AI applications for "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."
To the surprise of exactly nobody familiar with Silicon Valley's flexible ethics, he was quick to add that Google "will continue our work with governments and the military in many other areas" including cybersecurity, training and military recruitment. Because we all know that military training has nothing whatsoever to do with facilitating injuries to people.
Google's moral posturing aside, the brouhaha over Project Maven does raise a host of important questions about what defense, national security and law-enforcement applications of artificial intelligence will mean for humanity in the near and distant future. So I decided to pose some of them to somebody who's been giving the whole thing deep thought: Paul Scharre, author of a new book, "Army of None: Autonomous Weapons and the Future of War."
Scharre, a former Army Ranger who deployed to Afghanistan and Iraq, is now the director of the technology and national security program at the Center for a New American Security, a Washington think tank founded by some heavy hitters from the Obama administration's Defense and State Departments. Here is a lightly edited transcript of our discussion:
Tobin Harshaw: Let's start with the specific, then move to the general. Many people know that Google decided not to renew its contract with the Pentagon on Project Maven. Very few people probably know what Project Maven is. Can you briefly describe it, and explain how AI -- machine learning -- factors into it?
Paul Scharre: The essence is using artificial intelligence to better process drone imagery so that people can understand it. In the public imagination, drones are often synonymous with drone strikes. For the military, the real value that drones bring to the table is their ability to do persistent surveillance. Most of the time they're doing reconnaissance missions -- just watching -- and they're following people and mapping terrorist networks and scooping up volumes of data that are very hard for humans to process.
Read the full interview at Bloomberg