March 10, 2025
The United States Must Avoid AI’s Chernobyl Moment
In January, U.S. President Donald Trump tasked his advisers with developing, by July 2025, an AI Action Plan — a roadmap intended to “sustain and enhance America’s AI dominance.” This call to action mirrors the early days of nuclear energy — a transformative technology with world-changing potential but also grave risks. Much as the nuclear industry was derailed by public backlash following disasters such as Three Mile Island and Chernobyl, AI could face a similar crisis of confidence unless policymakers take proactive steps to prevent a large-scale incident.
A single large-scale AI disaster—be it in cybersecurity, critical infrastructure, or biotechnology—could undermine public trust, stall innovation, and leave the United States trailing global competitors. Recent reports indicate plans to cut the government’s AI capacity by dismantling the AI Safety Institute. But this would be a self-inflicted wound—not only for safety, but for progress. If Washington fails to anticipate and mitigate major AI risks, the United States risks falling behind in the fallout from what could become AI’s Chernobyl moment.
The United States cannot let speculative fears trigger heavy-handed regulations that would cripple U.S. AI innovation.
For many Americans, AI’s transformative promise today echoes the optimism around nuclear power in the early 1970s, when more than half of the public supported its expansion. Yet the 1979 accident at Three Mile Island — a partial reactor meltdown — shattered that optimism, and support for nuclear energy dropped precipitously by the mid-1980s. By 1984, nearly two-thirds of Americans opposed the expansion of nuclear energy. Statistical analysis suggests that the Three Mile Island incident was associated with a 72 percent decline in nuclear reactor construction globally. Following the deadlier 1986 Chernobyl accident, countries were more than 90 percent less likely to build nuclear power plants than they had been before.
Just as many nations were envisioning a renaissance for nuclear energy, the 2011 Fukushima disaster in Japan triggered renewed public skepticism and policy reversals. Fukushima — the only nuclear disaster besides Chernobyl ever to reach the highest classification on the International Nuclear and Radiological Event Scale — caused public support for nuclear energy to plummet around the world. The Japanese government halted all plans for new nuclear reactors. Germany shut down all 17 of its nuclear power generation facilities, ultimately increasing its dependence on Russian fossil fuels and compromising both its energy security and its climate goals. The world is still paying the opportunity cost today: limited access to clean, reliable nuclear power remains a critical bottleneck for AI development and other energy-intensive innovations.
Read the full article on Just Security.