Artificial Intelligence Safety and Stability

Nations around the world are investing in artificial intelligence (AI) to improve their military, intelligence, and other national security capabilities. Yet AI technology, at present, has significant safety and security vulnerabilities. AI systems could fail, potentially in unexpected ways, due to a variety of causes. Moreover, the interactive nature of military competition means that one nation’s actions affect others, including in ways that may be detrimental to mutual stability. There is an urgent need to explore actions that can mitigate these risks, such as improved processes for AI assurance, norms and best practices for responsible AI adoption, and confidence-building measures that improve stability among all nations.

The Center for a New American Security (CNAS) Artificial Intelligence Safety and Stability project aims to better understand AI risks and specific steps that can be taken to improve AI safety and stability in national security applications. Major lines of effort include:

This cross-program effort includes the CNAS Technology and National Security, Defense, Indo-Pacific Security, Transatlantic Security, and Energy, Economics, and Security programs. CNAS experts will share their findings in public reports and policy briefs with recommendations for policymakers.

This project is made possible with the generous support of Open Philanthropy.

Further Reading:


To Avoid AI Catastrophes, We Must Think Smaller

These incidents are not theoretical, nor are they projections of long-term dangers; rather, these AI tools are already presenting tangible threats to individual health and wel...

Technology & National Security

Unknown: Killer Robots

What happens when a machine makes life-or-death decisions? A documentary from Netflix featuring analysis and commentary from Paul Scharre, Stacie Pettyjohn, and Robert Work ex...

Making Unilateral Norms for Military AI Multilateral

Without significant effort from the U.S., the political declaration could easily die on the vine, and with it a structure for building AI technology responsibly....

Hijacked AI Assistants Can Now Hack Your Data

Early adopters of powerful new AI tools should recognize that they are subjects of a large-scale experiment with a new kind of cyberattack....

To Stay Ahead of China in AI, the U.S. Needs to Work with China

An AI gold rush is underway in the private sector in the wake of ChatGPT, but the geopolitical stakes are even greater. The United States and China are vying for global leader...

AI's Inhuman Advantage

AI agents’ victories demonstrate that machines can dramatically outperform humans in command and control, a potential major advantage in war....

America Can Win the AI Race

If the United States wants to win the AI competition, it must approach Beijing carefully and construct its own initiatives thoughtfully....

U.S. and China Can Show World Leadership by Safeguarding Military AI

The U.S. and China must move beyond unilateral statements and begin developing shared confidence-building measures to manage the risks of military AI competition....

AI Nuclear Weapons Catastrophe Can Be Avoided

AI-enabled nuclear weapons are particularly concerning due to their civilization-destroying nature....

Generative AI Could Be an Authoritarian Breakthrough in Brainwashing

The U.S. and allies should also invest aggressively into counter-propaganda capabilities that can mitigate the coming waves of generative AI propaganda — both at home and with...

How to counter China’s scary use of AI tech

In the face of these AI threats, democratic governments and societies need to work to establish global norms for lawful, appropriate and ethical uses of technologies like faci...

NOTEWORTHY: DoD Autonomous Weapons Policy

In this CNAS Noteworthy, Vice President and Director of Studies Paul Scharre breaks down the new Directive and what it means for the U.S. military’s approach to lethal autonom...

How ‘Killer Robots’ Can Help Us Learn from Mistakes Made in AI Policies

While recent events may have sparked a public outcry over the dangers of “killer robots,” we should not lose sight of the danger that poor processes create when deploying AI s...

Decoupling Wastes U.S. Leverage on China

The ability to deny China access to advanced chips is a powerful advantage whose value is growing exponentially...

Putting Principles into Practice: How the U.S. Defense Department is Approaching AI

While the application of RAI is still at a nascent stage, the DoD’s continued messaging and prioritization of safe and ethical AI is important....

Debunking the AI Arms Race Theory

In 2015, a group of prominent AI and robotics researchers signed an open letter warning of the dangers of autonomous weapons. “The key question for humanity today,” they wrote...

AI and International Stability: Risks and Confidence-Building Measures

Exploring the potential use of confidence-building measures built around the shared interests that all countries have in preventing inadvertent war....