May 05, 2020
AI & Military Procurement: What Computers Still Can’t Do
Not all artificial intelligence (AI) is created equal. A wide range of techniques and applications fall under the term "AI." Some of them, such as image recognition, work remarkably well. But other applications, especially those focused on prediction, have fallen short of expectations.
Fielding AI before it is ready could breed complacency among soldiers, produce inaccurate predictions, and invite users to game the system. Military procurement officers need basic AI literacy to become smart buyers and to ensure that AI is not fielded prematurely.
Read the full article from War on the Rocks.
Learn more about the Artificial Intelligence and International Stability Project:
Artificial Intelligence and International Stability Project
Despite calls from prominent scientists to avoid militarizing AI, nation-states are already using AI and machine-learning tools for national security purposes. AI has the pote...
More from CNAS
-
Transcript from Artificial Intelligence and the Role of Confidence-Building Measures
Transcript
On March 5, 2021, the CNAS Technology and National Security Program hosted a virtual discussion on AI and the role of confidence-building measures. This event is a part of the...
By Paul Scharre, Helen Toner, Michael Horowitz & Kerstin Vignard
-
When machine learning comes to nuclear communication systems
Commentary
Nuclear deterrence depends on fragile, human perceptions of credibility. As states armed with nuclear weapons turn to machine learning techniques to enhance their nuclear com...
By Philip Reiner, Alexa Wehsener & M. Nina Miller
-
How Adversarial Attacks Could Destabilize Military AI Systems
Commentary
Artificial intelligence and robotic technologies with semi-autonomous learning, reasoning, and decision-making capabilities are increasingly being incorporated into defense, m...
By Dr. David Danks
-
AI Deception: When Your Artificial Intelligence Learns to Lie
Commentary
In artificial intelligence circles, we hear a lot about adversarial attacks, especially ones that attempt to “deceive” an AI into believing, or to be more accurate, classifyin...
By Heather Roff