November 07, 2024
Controlling the danger: managing the risks of AI-enabled nuclear systems
Recent developments in artificial intelligence (AI) have accelerated the debate among U.S. and European policymakers about the opportunities and risks of military AI. In many cases, this debate is an attempt to answer four simple questions that have murky answers: What are the military advantages of AI? What are its risks? How can we balance those risks against the opportunities? And how do we prevent those risks from escalating into a greater crisis? This paper deals with one of these risks: the further integration of AI and autonomous tools into the nuclear command, control, and communications (NC3) networks of nuclear-armed powers. I start by outlining some of the ways AI has previously been used in NC3 systems, then examine the risks that these systems pose, and finally offer some solutions that the international community can deploy to manage them.
Read the full article from the NATO Defense College.