April 27, 2022

The U.S. and China Need Ground Rules for AI Dangers

The threats are bigger, the stakes are higher, and the level of trust between the United States and China is lower today than it was in 2014, when experts from both countries first began discussing the risks posed by artificial intelligence (AI). At a time when about 9 in 10 U.S. adults consider China to be a “competitor” or “enemy,” calls for Washington and Beijing to cooperate on shared challenges routinely fall on deaf ears. But, as laboratories in both countries continue to unveil dramatic capabilities for AI systems, it is more important than ever that the United States and China take steps to mitigate existential threats posed by AI accidents.

As a technology, AI is profoundly fragile. Even with perfect information and ideal operating circumstances, machine learning systems break easily and perform in ways contrary to their intended function. Since 2017, the Global Partnership on AI has logged “more than 1,200 reports of intelligent systems causing safety, fairness, or other real-world problems,” from autonomous car accidents to racially biased hiring decisions. When the stakes are low, the risk of an AI accident can be tolerable—such as being presented with an uninteresting Netflix recommendation or suboptimal driving route. But in a high-pressure, low-information military environment, both the probability and consequences of AI accidents are bound to increase.

Weapon systems placed on high alert, for instance, could mistake a routine incident for an attack—and even respond automatically. Some of the Cold War’s most dangerous nuclear warning malfunctions were narrowly averted only because human judgment prevailed. For now, nuclear command and control systems in the United States and China still require that element of human decision-making—but shipboard defense systems that might be involved in naval confrontations, for instance, do not.

Neither side trusts the other on this issue. Over the past six months, I have spoken on a handful of occasions with retired Chinese military leaders about the risks involved with AI systems. They view the U.S. Defense Department’s AI ethics principles and broader approach to “responsible AI” as bad-faith efforts to skirt multilateral negotiations aimed at restricting the development of autonomous weapons. Meanwhile, U.S. observers don’t believe China is serious about those negotiations, given its extraordinarily narrow definition of lethal autonomous weapons systems. (China has called for a ban only on autonomous weapons that cannot be recalled once initiated and which kill with indiscriminate effect.) Both militaries are developing automated target recognition and fire control systems based on AI, and the last substantial discussion among the United Nations Group of Governmental Experts focused on these issues is set to conclude in mid-2022.

Read the full article from Foreign Policy.
