April 06, 2023
Making Unilateral Norms for Military AI Multilateral
The speed and pitfalls of artificial intelligence (AI) development are on public display thanks to the race for dominance among leading AI firms following the public release of ChatGPT. One area where this “arms race” mentality could have grave consequences is in military use of AI, where even simple mistakes could cause escalation, instability, and destruction. In an attempt to mitigate these risks, the State Department released the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. The declaration is a good step toward improving the global conversation around AI in military systems. The United States can work with its closest allies to turn this unilateral statement into a multilateral commitment to promote norms for military AI use around the globe.
Without significant effort from the U.S., the political declaration could easily die on the vine, and with it a structure for building AI technology responsibly.
The United States—specifically the Defense Department—has already released policy documents on AI in military affairs, including the Ethical Principles for Artificial Intelligence, the Responsible Artificial Intelligence Strategy and Implementation Pathway, and Directive 3000.09, which lay out principles and frameworks for developing autonomous weapons systems. The State Department's political declaration builds on the accomplishments of these documents. After a short statement of purpose about the need for ethical and safe AI and the dangers of poorly designed systems, the declaration lays out best practices for responsible AI development. More specifically, it urges states to review AI systems for compliance with international law, build auditable AI systems, work to reduce unintended bias in the technology, maintain acceptable levels of human judgment and training, and test for safety and alignment. For the most part, the best practices are outlined broadly. While some observers may argue for a narrower approach, this breadth is a strength for a declaration designed to build a normative framework, since many countries should be able to agree to these practices without difficulty.
Read the full article from Lawfare.