It was almost a year ago when Google employees made waves from Silicon Valley to Washington, D.C., by signing a letter objecting to the company’s work with the Defense Department’s Project Maven.
The effort — to develop AI systems capable of analyzing reams of full-motion video data collected by drones and tipping off human analysts when people or events of interest appear — was viewed by the employees as Google being in the business of war. Eventually, the company chose not to pursue another Project Maven contract.
But the brouhaha might have been mitigated had the Pentagon and Silicon Valley known how to communicate better, said Paul Scharre, director of the Center for a New American Security’s technology and national security program.
“There’s not a lot of crosstalk and crosspollination between these communities — between policymakers and those in the AI community who are concerned,” said Scharre, who is also the author of the book Army of None: Autonomous Weapons and the Future of War.
To try to bridge the gap, CNAS is spearheading a new effort — known as the Project on Artificial Intelligence and International Stability — to create more dialogue among policymakers, the developers of AI platforms and national security experts working outside of government.
Read the full article and more in National Defense Magazine.