December 04, 2025
Look Before We Leap on Artificial Intelligence
This article was originally published on The Dispatch.
A debate about the role that artificial intelligence should and will play in society, and how it will affect humanity for both good and ill, is currently underway. At the same time, a larger, potentially more consequential debate looms—whether humanity should seek to prevent the ever-advancing capabilities of AI from evolving into artificial general intelligence (AGI), and eventually some form of superintelligence. Some experts believe this step is impossible; others think it imminent.
In reality, no one knows how to define AGI, whether it is possible, or, if it comes to exist, whether humans can control it or survive its advent. Yet this uncertainty does not absolve policymakers of the responsibility to consider the risks AGI might pose and to take precautions against worst-case outcomes before it is too late.
Any technology smarter than humans may be capable of breaking free of the control systems humans have designed, allowing it to pursue actions that are not fully aligned with the best interests of humans individually or humanity collectively.
Debates over AGI quickly veer into the philosophical, but practical decisions about how to approach the pursuit of the technology are essential if humanity is to prepare for its possible development. Artificial general intelligence typically refers to an AI system capable of performing a wide range of cognitive tasks at or above human expert levels. Such systems could in theory modify their own algorithms and architectures, enabling a potential acceleration of their capabilities through what is known as recursive self-improvement. This, in turn, could lead to a rapid transition—possibly before human beings even recognize it is happening—into a “superintelligence” system more capable than every human at, well, everything. Such a system could grant its designer enormous power, but could also escape human control entirely.
The concern that this technology might be possible and that whoever achieves it first could gain a strategic and permanent advantage has sparked a new AGI race, prompting calls for everything from an AGI “Manhattan Project” to a nuclear deterrent-like policy of military preemption to ensure no one can build such a capability. The resulting split is often cast as a debate between optimists and doomers. But this oversimplified framing bypasses the more thoughtful and nuanced discussion needed to consider the risks and benefits of AGI, and in turn to weigh what prudent steps we could take now to guard against negative outcomes while preserving the promise of technical advancement. In this, the starting position should be one of humility, because no one knows whether AGI is possible or what its impact on humanity will ultimately look like.
Read the full article on The Dispatch.