April 27, 2024
A Blueprint for a Functional China-US Working Group on AI
Late last year, U.S. President Joe Biden and China’s leader Xi Jinping met in San Francisco in an attempt to restabilize the relationship after a troubled year. The meeting ended without a concrete agreement on AI, despite rumors of one, but both sides committed to forming a working group on AI in the future.
Since then, little progress has been made in defining this working group. Even so, it carries the seeds of potential success. Meaningful talks between the two military, economic, and technology superpowers could even have a galvanizing effect on other international efforts on AI that are currently deadlocked, such as the United Nations’ expert group. What is most needed between the two powers is regularized and structured contact on critical AI safety and stability issues. Provided the United States and China can focus on practical considerations, this working group could be a significant source of stability.
Agreements on human control of nuclear weapons could establish a norm that helps prevent the destruction of all life.
Creating regular, repeated, and reliable contact will be the most difficult task, but there are steps the working group members can take to make it more likely. Both sides should agree to keep the agenda as focused as possible rather than using the group as a forum for grandstanding. There are political incentives to air grievances: China will want to complain about U.S. export controls, and the United States will want to examine the use of AI to enable human rights abuses in China. It will be impossible to avoid these topics entirely, but since they will not be solved by the working group, they should not consume the whole conversation. Instead, the group should move quickly from those topics to areas where there is a congruence of interests specific to the bilateral relationship.
The Bletchley Declaration, which both the United States and China signed at the U.K. AI Safety Summit, includes proposals this working group could build on. In the context of the China-U.S. relationship, sharing AI testing and evaluation standards and procedures would be both beneficial to AI safety and trust and politically achievable. The issue is focused mostly on civilian applications of a new technology: testing, evaluation, safety, risk, and transparency standards affect far more civilian users, given the rarity of military AI systems. Additionally, removing the security implications from the conversation will create opportunities for a more candid negotiation.
Read the full article from The Diplomat.