August 21, 2024
Regulating Artificial Intelligence Must Not Undermine NIST’s Integrity
The United States is the global leader in the development of AI and is well-positioned to influence AI’s future trajectories. Decisions made today in the US will have a long-lasting impact, both domestically and globally, on how we build, use, and experience AI. However, recent legislative proposals and executive actions on AI risk entangling the National Institute of Standards and Technology (NIST) in politically charged decisions, potentially calling the organization’s neutrality into question.
This is an outcome that must be prevented. NIST plays a key role in supporting American scientific and economic leadership in AI, and a strong, respected, and politically neutral NIST is a critical component for supporting America’s leadership in technological development and innovation.
A strong NIST will continue to help build standards that are adopted globally and lay the foundation for further American AI innovation and dissemination.
For over a century, NIST has helped advance American commerce, innovation, and global technological leadership. NIST’s experts have developed groundbreaking standards, techniques, tools, and evaluations that have pushed the frontier of measurement science. Today, almost every product or service we interact with has been shaped by the technology, measurement, and standards provided by NIST. More recently, in the context of ongoing global AI competition, NIST has also been active in developing important standards for AI-based systems.
Key to this success has always been NIST’s ability to keep politics out of science: remaining neutral and focusing on what it does best, measurement science. Now, in the name of AI safety, many emerging proposals would task NIST with conducting evaluations of AI-based systems themselves. This risk is further compounded by the introduction of an increasingly politicized AI Safety Institute (AISI). Though these concerns might seem trivial, the long-term implications are significant.
Read the full article from Tech Policy Press.