December 04, 2025
Look Before We Leap on Artificial Intelligence
This article was originally published on The Dispatch.
A debate about the role that artificial intelligence should and will play in society, and how it will affect humanity both for good and for ill, is currently underway. At the same time, a larger, potentially more consequential debate looms: whether humanity should seek to prevent the ever-advancing capabilities of AI from evolving into artificial general intelligence (AGI), and eventually some form of superintelligence. Some experts believe this step is impossible; others think it imminent.
In reality, no one knows how to define AGI, whether it is possible, or, if it comes to exist, whether humans could control it or survive its advent. Yet this uncertainty does not absolve policymakers from considering the risks AGI might pose and taking precautions against worst-case outcomes before it is too late.
Any technology smarter than humans may be capable of breaking free of the control systems humans have designed, allowing it to pursue actions that are not fully aligned with the best interests of humans individually or humanity collectively.
Debates over AGI quickly veer into the philosophical, but practical decisions about how to approach the pursuit of the technology are essential if humanity is to prepare for its possible development. Artificial general intelligence typically refers to an AI system capable of performing a wide range of cognitive tasks at or above human expert levels. Such systems could in theory modify their own algorithms and architectures, enabling a potential acceleration of their capabilities through what is known as recursive self-improvement. This, in turn, could lead to a rapid transition—possibly before human beings even recognize it is happening—into a “superintelligence” system more capable than every human at, well, everything. Such a system could grant its designer enormous power, but could also escape human control entirely.
The concern that this technology might be possible, and that whoever achieves it first could gain a strategic and permanent advantage, has sparked a new AGI race, prompting calls for everything from an AGI "Manhattan Project" to a nuclear deterrent-like policy of military preemption to ensure no one can build such a capability. This debate is often cast as a split between optimists and doomers. But that oversimplistic framing bypasses the more thoughtful and nuanced discussion needed to weigh the risks and benefits of AGI, and in turn to consider what prudent steps we could take now to guard against negative outcomes while preserving the promise of technical advancement. The starting position should be one of humility, because no one knows whether AGI is possible or what its impact on humanity will ultimately look like.
Read the full article on The Dispatch.