April 27, 2024
A Blueprint for a Functional China-US Working Group on AI
Late last year U.S. President Joe Biden and China’s leader Xi Jinping met in San Francisco in an attempt to restabilize the relationship after a troubled year. The meeting ended without a concrete agreement on AI, despite rumors of one, but both sides committed to form a working group on AI in the future.
Since then, little progress has been made in defining this working group. Even so, it carries the seeds of potential success. Meaningful talks between the two military, economic, and technology superpowers could even have a galvanizing effect on other international efforts on AI that are currently deadlocked, like the United Nations’ expert group. What is most necessary between the two powers is regularized and structured contact on critical AI safety and stability issues. Provided the United States and China can focus on practical considerations, this working group could be a significant source of stability.
Agreements about human control of nuclear weapons could begin a norm that helps prevent the destruction of all life.
Creating regular, repeated, and reliable contact will be the most difficult task, but there are steps the working group members can take to ensure it. Both sides should agree to keep the agenda as focused as possible rather than using the group as a forum for grandstanding. There are political incentives to air grievances: China will want to complain about U.S. export controls, and the United States will want to examine the use of AI to enable human rights abuses in China. It will be impossible to avoid these topics entirely, but since the working group will not resolve them, they should not consume the whole conversation. Instead, the group should move quickly from those topics into areas where there is a congruence of interests specific to the bilateral relationship.
The Bletchley Declaration, which both the United States and China signed at the U.K. AI Safety Summit, includes some proposals that this working group could build on. In the context of the China-U.S. relationship, sharing AI testing and evaluation standards and procedures would be both beneficial to AI safety and trust and politically achievable. The issue is mostly focused on civilian applications of a new technology: testing, evaluation, safety, risk, and transparency standards affect far more civilian users, given the rarity of military AI systems. Additionally, removing the security implications from the conversation will create opportunities for a more candid negotiation.
Read the full article from The Diplomat.