Artificial Intelligence Security and Stability
The development of artificial intelligence (AI) is at a pivotal moment. AI capabilities continue to improve rapidly, with top-performing systems on the verge of eclipsing human expert-level performance in critical domains such as mathematics, coding, and science.

Amid this shifting technological landscape, AI has become a source of intense geopolitical competition, particularly between the United States and China. Nations around the world are investing in AI to improve their military, intelligence, and other national security capabilities, and AI has become a central issue for U.S. national security and industrial policy. How U.S. policymakers address these challenges, and how other nations and private companies respond, could have profound consequences for the future of AI development.

The CNAS AI Security and Stability project aims to inform government decision-making on the most critical AI policy issues shaping the future of AI development. Major lines of effort include:
- Understand and mitigate risks from national security relevant advanced AI capabilities: These risks include dangerous capabilities in domains such as cyber operations and biological weapons; threats to nuclear stability and the financial sector; potential misalignment or loss of control in agentic AI systems; and systemic risks from U.S.-China AI competition.
- Understand and shape opportunities for compute governance: Compute is emerging as a key lever for AI governance as a consequence of technological and geopolitical trends. This line of effort will deliver concrete policy recommendations to preserve U.S. advantage and leverage in governing computing hardware and, by extension, the most capable frontier AI systems.
- Improve U.S. military processes for ensuring safe, secure, and trusted AI: This line of effort seeks to identify concrete ways that military AI systems may fail and how the U.S. military can establish policies that ensure AI capabilities are safe and trustworthy.
- Understand Chinese decision-making on AI and stability: This line of effort focuses on ways that advances in AI contribute to risks in the U.S.-China security relationship and how U.S. and allied policymakers can mitigate those risks.
- Inform the use of U.S. economic security tools to shape AI development and proliferation: This line of effort analyzes the policy and commercial drivers of AI diffusion globally, develops a U.S. economic security policy framework for responsible diffusion of AI, and assesses long-term risks to U.S. AI leadership from unintended consequences of broader U.S. economic policy decisions.
This cross-program effort includes the CNAS Technology and National Security, Defense, Indo-Pacific Security, and Energy, Economics, and Security programs. CNAS experts will share their findings in public reports and policy briefs with recommendations for policymakers. This project is made possible with the generous support of Coefficient Giving.
Further Reading
Lessons in Learning
Executive Summary Although claims of a revolution in military affairs may be overhyped, the potential for artificial intelligence (AI) and autonomy to change warfare is growing...
Promethean Rivalry
Executive Summary Just as nuclear weapons revolutionized 20th-century geopolitics, artificial intelligence (AI) is primed to transform 21st-century power dynamics—with world l...
Safe and Effective
The promise of artificial intelligence (AI) and autonomy to change the character of war inches closer to reality...
AI and the Evolution of Biological National Security Risks
New AI capabilities may reshape the risk landscape for biothreats in several ways. AI is enabling new capabilities that might, in theory, allow advanced actors to optimize bio...
Secure, Governable Chips
Broadly capable AI systems, built and deployed using specialized chips, are becoming an engine of economic growth and scientific progress. At the same time, these systems also...
Artificial Intelligence and Nuclear Stability
A lack of clear guidance risks forgoing valuable opportunities to use AI or, even worse, adopting AI in ways that might undermine nuclear surety and deterrence....
Catalyzing Crisis
The arrival of ChatGPT in November 2022 initiated both great excitement and fear around the world about the potential and risks of artificial intelligence (AI). In response, s...
To Avoid AI Catastrophes, We Must Think Smaller
These incidents are not theoretical, nor are they projections of long-term dangers; rather, these AI tools are already presenting tangible threats to individual health and well-being...
Unknown: Killer Robots
What happens when a machine makes life-or-death decisions? A documentary from Netflix featuring analysis and commentary from Paul Scharre, Stacie Pettyjohn, and Robert Work ex...
Making Unilateral Norms for Military AI Multilateral
Without significant effort from the U.S., the political declaration could easily die on the vine, and with it a structure for building AI technology responsibly....
Hijacked AI Assistants Can Now Hack Your Data
Early adopters of powerful new AI tools should recognize that they are subjects of a large-scale experiment with a new kind of cyberattack....
To Stay Ahead of China in AI, the U.S. Needs to Work with China
An AI gold rush is underway in the private sector in the wake of ChatGPT, but the geopolitical stakes are even greater. The United States and China are vying for global leadership...
AI's Inhuman Advantage
AI agents’ victories demonstrate that machines can dramatically outperform humans in command and control, a potential major advantage in war....
America Can Win the AI Race
If the United States wants to win the AI competition, it must approach Beijing carefully and construct its own initiatives thoughtfully....
U.S. and China Can Show World Leadership by Safeguarding Military AI
The US and China must move beyond unilateral statements and begin developing shared confidence-building measures to manage the risks of military AI competition....
AI Nuclear Weapons Catastrophe Can Be Avoided
AI-enabled nuclear weapons are particularly concerning due to their civilization-destroying nature....
Generative AI Could Be an Authoritarian Breakthrough in Brainwashing
The U.S. and allies should also invest aggressively into counter-propaganda capabilities that can mitigate the coming waves of generative AI propaganda — both at home and with...
How to counter China’s scary use of AI tech
In the face of these AI threats, democratic governments and societies need to work to establish global norms for lawful, appropriate and ethical uses of technologies like faci...
NOTEWORTHY: DoD Autonomous Weapons Policy
In this CNAS Noteworthy, Vice President and Director of Studies Paul Scharre breaks down the new Directive and what it means for the U.S. military’s approach to lethal autonom...
How ‘Killer Robots’ Can Help Us Learn from Mistakes Made in AI Policies
While recent events may have sparked a public outcry over the dangers of “killer robots,” we should not lose sight of the danger that poor processes create when deploying AI s...
Decoupling Wastes U.S. Leverage on China
The ability to deny China access to advanced chips is a powerful advantage whose value is growing exponentially...
Putting Principles into Practice: How the U.S. Defense Department is Approaching AI
While the application of RAI is still at a nascent stage, the DoD’s continued messaging and prioritization of safe and ethical AI is important....
Debunking the AI Arms Race Theory
In 2015, a group of prominent AI and robotics researchers signed an open letter warning of the dangers of autonomous weapons. “The key question for humanity today,” they wrote...
AI and International Stability: Risks and Confidence-Building Measures
Exploring the potential use of confidence-building measures built around the shared interests that all countries have in preventing inadvertent war....
CNAS Experts
- Paul Scharre, Executive Vice President
- Stacie Pettyjohn, Senior Fellow and Director, Defense Program
- Andrea Kendall-Taylor, Senior Fellow and Director, Transatlantic Security Program
- Emily Kilcrease, Senior Fellow and Director, Energy, Economics and Security Program
- Jacob Stokes, Senior Fellow and Deputy Director, Indo-Pacific Security Program
- Janet Egan, Senior Fellow and Deputy Director, Technology and National Security Program
- Josh Wallin, Fellow, Defense Program
- Michael Depp, Research Associate, AI Security and Stability Project
- Caleb Withers, Research Associate, Technology and National Security Program
- Liam Epstein, Research Assistant, Artificial Intelligence Security and Stability Project
- Tim Fist, Senior Adjunct Fellow, Technology and National Security Program