April 30, 2020
When machine learning comes to nuclear communication systems
Nuclear deterrence depends on fragile human perceptions of credibility.
As states armed with nuclear weapons turn to machine learning techniques to enhance their nuclear command, control, and communications (NC3) systems, the United States and its competitors should take care that these new tools do not inadvertently fuel crisis instability or accelerate an arms race.
NC3 Systems and Credibility
Stability between competing nations relies largely on ascertaining the credibility of threats, capabilities, and decisions in order to decrease uncertainty and reduce the risk of conflict. Throughout the Cold War, NC3 systems served as mechanisms for signaling intent and capability, ensuring the credibility of deterrence postures, and decreasing the risk of nuclear war. In the early 21st century, technological developments such as machine learning are introducing new dynamics and capabilities that increase uncertainty and contribute to "strategic instability."
Read the full article from C4ISRNET.
Learn more about the Artificial Intelligence and International Stability Project.