December 04, 2025
Look Before We Leap on Artificial Intelligence
This article was originally published on The Dispatch.
A debate about the role that artificial intelligence should and will play in society, and how it will affect humanity for both good and ill, is currently underway. At the same time, a larger, potentially more consequential debate looms: whether humanity should seek to prevent the ever-advancing capabilities of AI from evolving into artificial general intelligence (AGI), and eventually some form of superintelligence. Some experts believe this step is impossible; others think it imminent.
In reality, no one knows how to define AGI, whether it is possible, or, if it comes to exist, whether humans can control it or survive its advent. Yet this uncertainty does not absolve policymakers from considering the risks AGI might pose and taking precautions against worst-case outcomes before it is too late.
Any technology smarter than humans may be capable of breaking free of the control systems humans have designed, allowing it to pursue actions that are not fully aligned with the best interests of humans individually or humanity collectively.
Debates over AGI quickly veer into the philosophical, but practical decisions about how to approach the pursuit of the technology are essential if humanity is to prepare for its possible development. Artificial general intelligence typically refers to an AI system capable of performing a wide range of cognitive tasks at or above human expert levels. Such a system could in theory modify its own algorithms and architectures, enabling a potential acceleration of its capabilities through what is known as recursive self-improvement. This, in turn, could lead to a rapid transition, possibly before human beings even recognize it is happening, into a "superintelligent" system more capable than every human at, well, everything. Such a system could grant its designer enormous power, but could also escape human control entirely.
The concern that this technology might be possible, and that whoever achieves it first could gain a strategic and permanent advantage, has sparked a new AGI race, prompting calls for everything from an AGI "Manhattan Project" to a nuclear deterrent-like policy of military preemption to ensure no one can build such a capability. The resulting split is often cast as a debate between optimists and doomers. But this oversimplified framing bypasses the more thoughtful and nuanced discussion needed to weigh the risks and benefits of AGI, and in turn to consider what prudent steps we could take now to guard against negative outcomes while preserving the promise of technical advancement. Here, the starting position should be one of humility, because no one knows whether AGI is possible or what its impact on humanity will ultimately look like.
Read the full article on The Dispatch.