July 06, 2023
Frontier AI Regulation: Managing Emerging Risks to Public Safety
Responsible AI innovation can provide extraordinary benefits to society, such as delivering medical and legal services to more people at lower cost, enabling scalable personalized education, and contributing solutions to pressing global challenges like climate change and pandemic prevention. However, guardrails are necessary to prevent the pursuit of innovation from imposing excessive negative externalities on society. There is increasing recognition that government oversight is needed to ensure AI development is carried out responsibly; we hope to contribute to this conversation by exploring regulatory approaches to this end.
We think that it is important to begin taking practical steps to regulate frontier AI today, and that the ideas discussed in this paper are a step in that direction.
In this paper, we focus specifically on the regulation of frontier AI models, which we define as highly capable foundation models that could have dangerous capabilities sufficient to pose severe risks to public safety and global security. Examples of such dangerous capabilities include designing new biochemical weapons, producing highly persuasive personalized disinformation, and evading human control.
This article was originally published on arXiv by Markus Anderljung, Joslyn Barnhart, Anton Korinek, Jade Leung, Cullen O'Keefe, Jess Whittlestone, Shahar Avin, Miles Brundage, Justin Bullock, Duncan Cass-Beggs, Ben Chang, Tantum Collins, Tim Fist, Gillian Hadfield, Alan Hayes, Lewis Ho, Sara Hooker, Eric Horvitz, Noam Kolt, Jonas Schuett, Yonadav Shavit, Divya Siddarth, Robert Trager, and Kevin Wolf.