
Artificial Intelligence Safety and Stability
Nations around the world are investing in artificial intelligence (AI) to improve their military, intelligence, and other national security capabilities. Yet AI technology, at present, has significant safety and security vulnerabilities. AI systems could fail, potentially in unexpected ways, due to a variety of causes. Moreover, the interactive nature of military competition means that one nation’s actions affect others, including in ways that may be detrimental to mutual stability. There is an urgent need to explore actions that can mitigate these risks, such as improved processes for AI assurance, norms and best practices for responsible AI adoption, and confidence-building measures that improve stability among all nations.
The Center for a New American Security (CNAS) Artificial Intelligence Safety and Stability project aims to better understand AI risks and specific steps that can be taken to improve AI safety and stability in national security applications. Major lines of effort include:
- Anticipating, preventing, and mitigating catastrophic AI failures
- Improving Defense Department processes for ensuring safe, secure, and trusted AI
- Understanding and shaping opportunities for compute governance
This cross-program effort includes the CNAS Technology and National Security, Defense, Indo-Pacific Security, Transatlantic Security, and Energy, Economics, and Security programs. CNAS experts will share their findings in public reports and policy briefs with recommendations for policymakers.
This project is made possible with the generous support of Open Philanthropy.
Further Reading:
Putting Principles into Practice: How the U.S. Defense Department is Approaching AI
While the application of RAI is still at a nascent stage, the DoD's continued messaging and prioritization of safe and ethical AI is important...
Debunking the AI Arms Race Theory
In 2015, a group of prominent AI and robotics researchers signed an open letter warning of the dangers of autonomous weapons. “The key question for humanity today,” they wrote...
AI and International Stability: Risks and Confidence-Building Measures
Exploring the potential use of confidence-building measures built around the shared interests that all countries have in preventing inadvertent war...
CNAS Experts
- Paul Scharre, Vice President and Director of Studies
- Martijn Rasser, Senior Fellow and Director, Technology and National Security Program
- Stacie Pettyjohn, Senior Fellow and Director, Defense Program
- Andrea Kendall-Taylor, Senior Fellow and Director, Transatlantic Security Program
- Emily Kilcrease, Senior Fellow and Director, Energy, Economics, and Security Program
- Jacob Stokes, Senior Fellow, Indo-Pacific Security Program
- Megan Lamberth, Former Associate Fellow, Technology and National Security Program
- Alexandra Seymour, Associate Fellow, Technology and National Security Program