Artificial intelligence has sprawled its way across national news headlines, leading to a range of pressing conversations about the technology and how U.S. policymakers should get ahead of its wide-ranging applications. Should there be an international watchdog for AI? How should the United States regulate cutting-edge AI models? Can America win the AI race? CNAS experts are sharpening the conversation around AI’s role in national security. Continue reading this edition of Sharper to explore their ideas and recommendations.
U.S.-China Competition and Military AI
Two tectonic trends in the international security environment appear to be on a collision course. The first is the intensifying geopolitical rivalry between the United States and the People’s Republic of China. The second is the rapid development of artificial intelligence (AI) technologies, including for military applications. A new report by Jacob Stokes and Alexander Sullivan, with Noah Greene, explores the intersection of these trends, identifying five pathways through which military AI could undermine stability and increase strategic risks between the United States and China.
Response to OSTP “National Priorities for Artificial Intelligence Request for Information”
In May 2023, the White House Office of Science and Technology Policy (OSTP) issued a Request for Information (RFI) seeking input on national priorities for AI. CNAS experts submitted the following response, with a particular focus on policy measures for ‘frontier’ AI systems—general-purpose ‘foundation’ models at the frontier of capabilities and risks. To manage national security risks from these models, the response outlines recommendations across the AI lifecycle—from supporting technical AI safety research through risk assessments and evaluations around model release—and on the role of regulatory bodies and federal government policy levers.
Frontier AI Regulation: Managing Emerging Risks to Public Safety
"Guardrails are necessary to prevent the pursuit of innovation from imposing excessive negative externalities on society," write Tim Fist, Markus Anderljung, and other authors in a new report published on arXiv. "There is increasing recognition that government oversight is needed to ensure AI development is carried out responsibly; we hope to contribute to this conversation by exploring regulatory approaches to this end."
Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI
In this edition of Noteworthy, researchers from the AI Safety and Stability team comment on the July 2023 announcement from the White House on securing voluntary commitments on safety, security, and trust from leading AI companies.
AI’s Gatekeepers Aren’t Prepared for What’s Coming
"Until recently, AI has been a diffuse technology that rapidly proliferates," writes Paul Scharre in Foreign Policy. "Open-source AI models are readily available online. The recent shift to large models, such as OpenAI’s ChatGPT, is concentrating power in the hands of large tech companies that can afford the computing hardware needed to train these systems. The balance of global AI power will hinge on whether AI concentrates power in the hands of a few actors, as nuclear weapons did, or proliferates widely, as smartphones have."
How to Win Friends and Choke China’s Chip Supply
"While the United States has been careful to justify the new strategy on national security grounds, the strategy will unquestionably have significant economic impacts on China, which will compound over time as China’s capabilities are frozen in place and lag further and further behind other global producers," argues Emily Kilcrease in War on the Rocks. "It also remains unclear how far the U.S. strategy will extend beyond chips. The administration has identified artificial intelligence, quantum information systems, biotechnology and biomanufacturing, and advanced clean energy technologies as fundamental to U.S. national security. If the United States imposed restrictions across these technology ecosystems comparable to what has just been done for advanced chips production and supercomputing, it would almost certainly lead to a broad technological decoupling from China."
Chinese Firms Are Evading Chip Controls
"Enforcing a ban on Chinese access to leading-edge chips won’t be easy," observe Tim Fist, Jordan Schneider, and Lennart Heim in Foreign Policy. "But by establishing a chip inspection program, taking steps to help ensure cloud services are secure, providing the BIS with more funding, and continuing to seek allied cooperation, the U.S. government can erect significant barriers to China using advanced computing to power a new generation of dangerous military applications."
China Is Flirting with AI Catastrophe
"As the world settles into a new era of rivalry—this time between China and the United States—competition over another revolutionary technology, artificial intelligence, has sparked a flurry of military and ethical concerns parallel to those initiated by the nuclear race," argue Bill Drexel and Hannah Kelley in Foreign Affairs. "Those concerns are well worth the attention they are receiving, and more: a world of autonomous weapons and machine-speed war could have devastating consequences for humanity. Beijing’s use of AI tools to help fuel its crimes against humanity against the Uyghur people in Xinjiang already amounts to a catastrophe."
AI Nuclear Weapons Catastrophe Can Be Avoided
"This scenario has ample room for catastrophe—for both the international community and domestic leaders—should any of these weapons malfunction and lead to a nuclear fallout," warns Noah Greene in Lawfare. "As the Soviet-era Col. Petrov case kindly taught us, without a human firmly in control of the nuclear command-and-control structure, the odds of disaster creep slowly toward an unintended or uncontrolled nuclear exchange. An agreement between nuclear powers on this issue led by P5 states would be an important step toward recreating a patchwork of nuclear treaties that has dissolved over the past two decades. To do otherwise would be to flirt with an AI-enabled nuclear arms race."
An AI Challenge: Balancing Open and Closed Systems
"The open vs. closed framework remains imperfect," observes Pablo Chavez in CEPA. "In the AI context, both sides have a legitimate claim to being a better model for safety and security. As noted by Stability AI, an open-source AI company, open models and datasets can help ensure robust oversight; third parties with access can anticipate emerging risks and implement mitigations. But the nature of open-source licensing (which generally makes source code available to all) opens the door to entities or individuals who wish to cause harm or are simply not concerned about or resourced for risk mitigation."
Roles and Implications of AI in the Russian-Ukrainian Conflict
"Artificial Intelligence (AI) is emerging as a significant asset in the ongoing Russian-Ukrainian conflict," writes Samuel Bendett in Russia Matters. "Specifically, it has become a key data analysis tool that helps operators and warfighters make sense of the growing volume and amount of information generated by numerous systems, weapons and soldiers in the field. As AI use continues to evolve, its application on the current Ukrainian and future battlefields will translate into more precise and capable responses to adversary forces, movements and actions."
In the News:
Featuring commentary and analysis from experts including Paul Scharre, Richard Fontaine, Tim Fist, and Lt. Gen. Jack Shanahan.
About the Sharper Series
The CNAS Sharper series features curated analysis and commentary from CNAS experts on the most critical challenges in U.S. foreign policy. From the future of America's relationship with China to the state of U.S. sanctions policy and more, each collection draws on the reports, interviews, and other commentaries produced by experts across the Center to explore how America can strengthen its competitive edge.
Sign up to receive the latest analysis from the CNAS expert community on the most important issues facing America's national security.
More from CNAS
U.S.-China Competition and the Race to 6G
China views telecommunications as central to its geopolitical and strategic objectives....
By Sam Howell
Biden Took the First Step With AI Commitments — Now It’s Congress’ Turn
One of the keys to tackling these risks is developing advanced methods to train effective AI systems while maintaining Americans’ privacy....
By Josh Wallin
Time to Act: Building the Technical and Institutional Foundations for AI Assurance
Assurance for AI systems presents unique challenges....
By Josh Wallin & Andrew Reddie
ChinaTalk: AI Executive Order
Biden just dropped a 50-page executive order that's going to make the world safe for AI, hopefully? To discuss the sprawling EO, ChinaTalk brought on three CNAS analysts, Vive...
By Jordan Schneider, Vivek Chilukuri, Tim Fist & Bill Drexel