April 27, 2024
A Blueprint for a Functional China-US Working Group on AI
Late last year U.S. President Joe Biden and China’s leader Xi Jinping met in San Francisco in an attempt to restabilize the relationship after a troubled year. The meeting ended without a concrete agreement on AI, despite rumors of one, but both sides committed to forming a working group on AI in the future.
Since then, little progress has been made in defining this working group. Even so, it carries the seeds of potential success. Meaningful talks between the two military, economic, and technology superpowers could even have a galvanizing effect on other international efforts on AI that are currently deadlocked, like the United Nations’ expert group. What is most needed between the two powers is regularized and structured contact on critical AI safety and stability issues. Provided the United States and China can focus on practical considerations, this working group could be a significant source of stability.
Agreements about human control of nuclear weapons could establish a norm that helps prevent the destruction of all life.
Creating regular, repeated, and reliable contact will be the most difficult task, but there are steps the working group members can take to ensure it. Both sides should agree to keep the agenda as focused as possible rather than using the group as a forum for grandstanding. There are political incentives to air grievances: China will want to complain about U.S. export controls, and the United States will want to examine the use of AI to enable human rights abuses in China. It will be impossible to avoid these topics entirely, but since they will not be solved by the working group, they should not consume the whole conversation. Instead, the group should move quickly from those topics to areas where the two countries’ interests converge within the bilateral relationship.
The Bletchley Declaration, one of the outcomes of the U.K. AI Safety Summit and a document both the United States and China signed, includes proposals that this working group could build on. In the context of the China-U.S. relationship, sharing AI testing and evaluation standards and procedures would be both beneficial to AI safety and trust and politically achievable. The issue is focused largely on civilian applications of a new technology: testing, evaluation, safety, risk, and transparency standards affect far more civilian users, given the rarity of military AI systems. Additionally, removing security implications from the conversation will create opportunities for more candid negotiation.
Read the full article from The Diplomat.