June 22, 2018

AI researchers should help with some military work

By Gregory C. Allen

In January, Google chief executive Sundar Pichai said that artificial intelligence (AI) would have a “more profound” impact than even electricity. He was following a long tradition of corporate leaders claiming their technologies are both revolutionary and wonderful.

The trouble is that revolutionary technologies can also revolutionize military power. AI is no exception. On 1 June, Google announced that it would not renew its contract supporting a US military initiative called Project Maven. The project is the military’s first operationally deployed ‘deep-learning’ AI system, which uses layers of processing to transform data into abstract representations, in this case to classify images in footage collected by military drones. The company’s decision to withdraw came after roughly 4,000 of Google’s 85,000 employees signed a petition demanding that the company stop building “warfare technology”.

Such recusals pose a serious moral hazard. Incorporating advanced AI into the military is as inevitable as incorporating electricity once was, and the transition is fraught with ethical and technological risks. It will take input from talented AI researchers, including those at companies such as Google, to help the military stay on the right side of ethical lines.

Last year, I led a study on behalf of the US Intelligence Community showing that AI’s transformative impacts will cover the full spectrum of national security. Military robotics, cybersecurity, surveillance and propaganda are all vulnerable to AI-enabled disruption. The United States, Russia and China all expect AI to underlie future military power, and the monopoly enjoyed by the United States and its allies on key military technologies, such as stealth aircraft and precision-guided weapons, is nearing an end.


Read the Full Article at Nature
