April 23, 2018

The promise and peril of military applications of artificial intelligence

Artificial intelligence (AI) is having a moment in the national security space. While the public may still equate the notion of artificial intelligence in the military context with the humanoid robots of the Terminator franchise, there has been significant growth in discussions about the national security consequences of artificial intelligence. These discussions span academia, business, and government, from Oxford philosopher Nick Bostrom’s concern about the existential risk that artificial intelligence poses to humanity, to Tesla founder Elon Musk’s warning that artificial intelligence could trigger World War III, to Vladimir Putin’s statement that leadership in AI will be essential to global power in the 21st century.

What does this really mean, especially when you move beyond the rhetoric of revolutionary change and think about the real-world consequences of potential applications of artificial intelligence to militaries? Artificial intelligence is not a weapon. Instead, artificial intelligence, from a military perspective, is an enabler, much like electricity and the combustion engine. Thus, the effect of artificial intelligence on military power and international conflict will depend on the particular applications of AI that militaries and policymakers pursue. What follows are key issues for thinking about the military consequences of artificial intelligence: principles for evaluating what artificial intelligence “is” and how it compares to past technological changes, what militaries might use artificial intelligence for, potential limitations on its use, and the impact of AI military applications on international politics.

Read the full article at The Bulletin of the Atomic Scientists
