August 21, 2024
Regulating Artificial Intelligence Must Not Undermine NIST’s Integrity
The United States is the global leader in the development of AI and is well-positioned to influence AI’s future trajectories. Decisions made today in the US will have a long-lasting impact, both domestically and globally, on how we build, use, and experience AI. However, recent legislative proposals and executive actions on AI risk entangling the National Institute of Standards and Technology (NIST) in politically charged decisions, potentially calling the organization’s neutrality into question.
This is an outcome that must be prevented. NIST plays a key role in supporting American scientific and economic leadership in AI, and a strong, respected, and politically neutral NIST is a critical component for supporting America’s leadership in technological development and innovation.
A strong NIST will continue to help build standards that are adopted globally and lay the foundation for further American AI innovation and dissemination.
For over a century, NIST has helped advance American commerce, innovation, and global technological leadership. NIST’s experts have developed groundbreaking standards, techniques, tools, and evaluations that have pushed the frontier of measurement science. Today, almost every product or service we interact with has been shaped by the “technology, measurement, and standards provided by NIST.” More recently, amid ongoing global AI competition, NIST has also been active in developing important standards for AI-based systems.
Key to this success has always been NIST’s ability to keep politics out of science: remaining neutral and focusing on what it does best, measurement science. Now, in the name of AI safety, many emerging proposals would task NIST with directly conducting evaluations of AI-based systems itself. The risk of politicization is further compounded by the introduction of an increasingly politicized AI Safety Institute (AISI). Though these concerns might seem trivial today, the long-term implications are significant.
Read the full article from the Tech Policy Press.