May 30, 2023
An AI Challenge: Balancing Open and Closed Systems
Technology debates are often a tug-of-war between open and closed systems.
On one side, open systems allow interoperability, customization, and integration with third-party software or hardware. Champions highlight how openness promotes transparency, accountability, competition, and innovation. On the other side, defenders of closed systems argue that they are more stable and secure and better protect their owners’ property interests.
Navigating the spectrum between open and closed is critical to effective artificial intelligence policy. The right balance will promote innovation and competition while managing AI’s significant risks.
Much of AI’s creation and evolution has happened thanks to open-source development and diffusion. Numerous widely adopted open-source AI projects provide development frameworks and libraries, such as PyTorch, TensorFlow, and MXNet, and many companies – including Hugging Face, Stability AI, Nomic AI, and Meta – have released open-source AI models or enabled open-source development.
Google and OpenAI have traditionally stood on the side of openness. Both have published AI research and open-source tools. Google, for example, originally developed TensorFlow in-house and later released it as an open-source software library for building AI.
Read the full article from CEPA.