March 26, 2024

The Next Step in Military AI Multilateralism

As part of the deluge of new artificial intelligence (AI) policy documents surrounding the AI Safety Summit in November 2023, the United States released a long-awaited update to its Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. The most momentous part of the update was the addition of new signatories that joined the United States in agreeing to the principles, including the U.K., Japan, Australia, Singapore, Libya, and the Dominican Republic. The document is neither a binding treaty nor a detailed framework for international regulation of military AI. It is, however, a blueprint for a growing consensus on the military use of AI, one that could make the use of this technology in international politics safer and more stable. To further this mission, the United States needs to keep adding signatories to the declaration and push for consensus on the nuclear norms that were removed from the original version.


The political declaration is the United States' attempt to set the tone for the debate on military AI and autonomy. It represents a positive American vision of international AI regulation in the face of numerous calls for an international regime. It also arrives amid renewed interest in banning lethal autonomous weapons in the UN General Assembly and the resumption of the Group of Governmental Experts, which will discuss autonomous weapons in more detail. The United States has historically resisted full bans in favor of a softer approach of “responsible use,” which the document attempts to outline concretely ahead of these upcoming conversations.

A first draft of the declaration, released in February 2023 to much fanfare, essentially restated the principles from other U.S. policy documents as proto-norms. The document itself was a good blueprint for the responsible use of AI and autonomy, but its main failing was that the United States was largely speaking alone: not a single other nation publicly signed on. For the United States to be a global AI leader, someone had to be in its camp, and that was, at least publicly, not the case when the document was released.

This new version of the declaration seeks to rectify that, and it has been broadly successful, garnering more than 50 signatories. The countries signing on represent a wide geographic mix: while more than half are European, there are signatories from Africa, Asia, Latin America, and Oceania. They also extend beyond the treaty allies of the United States, representing real outreach.

Read the full article from Lawfare.
