November 06, 2023

Why China’s Involvement in the U.K. AI Safety Summit Was So Significant

Source: Time

Journalists: Will Henshall, Anna Gordon

The move surprised some observers because concerns about risks posed by advanced AI are less commonly expressed in China than they are in the West, says Bill Drexel, an associate fellow at military affairs think tank the Center for a New American Security, who notes that petitions and demands are not typically well-received by the Chinese authorities. “To see that, alongside a high profile diplomatic summit, is very interesting,” says Drexel. “I wouldn't be surprised if they had already cleared what they were going to do with some officials or something.”


Official sign-off on the paper could signal that Chinese officials are concerned about risks from advanced AI, or it’s possible that participating in AI safety discussion benefits Beijing in other ways, at the very least buying China some time to work on its own AI development, Drexel says.

Vice Minister Wu Zhaohui’s remarks at the opening plenary in the U.K. hinted at emerging tensions between the U.S. and China, says Drexel. First, Wu defended the open release of AI models, an approach that has historically been the norm and from which China benefits, but that some in the West are beginning to move away from amid concerns that open-release policies might allow misuse of the most powerful AI models. Second, Wu stated that “all nations have the right to develop and use artificial intelligence technology,” alluding to the U.S. chip export restrictions.

Whether or not these tensions worsen, Drexel believes cooperation between the two countries is unlikely. “You really miss the forest for the trees if you think that the U.S. and China are coming together on AI from this summit,” he says. “The reality is we've declared something close to economic war on China, particularly on artificial intelligence, by not just restricting the export of these ultra-advanced semiconductors, but also then updating the order to make them more restrictive just a few weeks ago.”

Despite these tensions, Robert Trager, co-director of the Oxford Martin AI Governance Initiative, argues that the U.S. and China can cooperate on common interests without transforming their overall relations, much as the U.S. and the Soviet Union agreed to prevent the spread of nuclear weapons under the Nuclear Non-Proliferation Treaty of 1968. “The non-proliferation regime is a great example of that. No one would say that the U.S. and the Soviet Union had good relations,” says Trager, who is also international governance lead at the Centre for the Governance of AI.

Drexel is less optimistic about such cooperation with China, saying even narrow cooperation on shared issues may prove difficult, as has been the case with diplomacy relating to other global concerns. “You talk to American diplomats, and a very common concern with China is that we try to separate out issues that we think are common concerns, especially climate, but also other tech safety issues like space debris, and biological risks, and so on,” says Drexel. “The perception on the American side, at least, is that China chronically subordinates these kinds of common interest issues to their broader geopolitical maneuvering vis-à-vis America in such a way that's deeply frustrating.”

China might be more willing to cooperate on AI safety if its leaders concluded that export restrictions made keeping pace with American AI development infeasible. That could incentivize them to push for stricter international safety measures to hinder U.S. AI development, says Drexel.

Read the full story and more from TIME.


  • Bill Drexel

    Fellow, Technology and National Security Program

    Bill Drexel is a Fellow for the Technology and National Security Program at CNAS. His work focuses on Sino-American competition, artificial intelligence, and technology as an ...