March 10, 2025
The United States Must Avoid AI’s Chernobyl Moment
In January, U.S. President Donald Trump tasked his advisors with developing an AI Action Plan by July 2025, a roadmap intended to “sustain and enhance America’s AI dominance.” This call to action mirrors the early days of nuclear energy — a transformative technology with world-changing potential but also grave risks. Just as the nuclear industry was derailed by public backlash following disasters such as Three Mile Island and Chernobyl, AI could face a similar crisis of confidence unless policymakers take proactive steps to prevent a large-scale incident.
A single large-scale AI disaster—be it in cybersecurity, critical infrastructure, or biotechnology—could undermine public trust, stall innovation, and leave the United States trailing global competitors. Recent reports indicate plans to cut the government’s AI capacity by dismantling the AI Safety Institute. But this would be a self-inflicted wound—not only for safety, but for progress. If Washington fails to anticipate and mitigate major AI risks, the United States risks falling behind in the fallout from what could become AI’s Chernobyl moment.
The United States cannot let speculative fears trigger heavy-handed regulations that would cripple U.S. AI innovation.
For many Americans, AI’s transformative promise today echoes the optimism around nuclear power in the early 1970s, when more than half of the public supported its expansion. Yet the 1979 accident at Three Mile Island—a partial reactor meltdown—shattered that optimism, with support for nuclear energy dropping precipitously by the mid-1980s. By 1984, nearly two-thirds of Americans opposed the expansion of nuclear energy. Statistical analysis suggests that the Three Mile Island incident was associated with a 72 percent decline in nuclear reactor construction globally. Following the deadlier 1986 Chernobyl incident, countries were more than 90 percent less likely to build nuclear power plants than prior to this accident.
Just as many nations were envisioning a renaissance for nuclear energy, the 2011 Fukushima disaster in Japan triggered renewed public skepticism and policy reversals. Fukushima — the only nuclear disaster besides Chernobyl ever to reach the highest classification on the International Nuclear and Radiological Event Scale — caused public support for nuclear energy to plummet around the world. The Japanese government halted all plans for new nuclear reactors. Germany shut down all 17 of its nuclear power plants, ultimately increasing its dependence on Russian fossil fuels and compromising both its energy security and its climate goals. The world is still paying the opportunity cost today: Limited access to clean, reliable nuclear power remains a critical bottleneck for AI development and other energy-intensive innovations.
Read the full article on Just Security.