November 20, 2025
Prepared, Not Paralyzed
Managing AI Risks to Drive American Leadership
Executive Summary
The Trump administration has embraced a pro-innovation approach to artificial intelligence (AI) policy. Its AI Action Plan, released July 2025, underscores the private sector’s central role in advancing AI breakthroughs and positioning the United States as the world’s leading AI power.1 At the Paris AI Action Summit in February 2025, Vice President JD Vance cautioned that an overly restrictive approach to AI development “would mean paralyzing one of the most promising technologies we have seen in generations.”2
Yet this emphasis on innovation does not diminish the government’s critical role in ensuring national security. On the contrary, AI advances will yield significant threats alongside unprecedented potential in this domain. Experts warn that advanced AI could introduce more autonomous cyber weapons, give a broader pool of actors the know-how to develop biological weapons, and malfunction in ways that cause massive damage.3 Private and public sector leaders alike have echoed these concerns.4 The urgent task for policymakers is to ensure that the federal government can anticipate and manage the national security implications of advanced AI capabilities—without resorting to blunt, ill-targeted, or burdensome regulation that would undermine America’s innovative edge. In other words, the government must prepare for potential risks from rapidly advancing AI while avoiding onerous regulations that unduly stifle the technology’s vast potential for good.
The status quo is insufficient: Technical expertise in advanced AI remains concentrated in a handful of companies, and the government is playing catch-up. Existing voluntary information-sharing commitments between AI labs and the federal government already face hurdles and likely will prove insufficient over time as the costs of providing transparency increase. Meanwhile, the private sector lacks both the national security expertise and the commercial incentives to manage these risks to the national interest. The United States cannot afford policies built solely on speculative fears. Yet in the face of real and rapid progress in national security–relevant capabilities, neither can it risk allowing an AI-driven disaster or a regulatory vacuum to derail technological progress.
While much of the policy debate rightly focuses on innovation and accelerating adoption, this report concentrates on a less developed but equally vital counterpart: managing the risks that could undermine those ambitions without stifling AI’s innovative potential. Effective risk management is not a brake on progress but a prerequisite for it, playing an essential role in sustaining public trust, preventing setbacks, shaping global standards, and ensuring that American leadership in AI endures over the long term.
Yet the pace of AI progress is accelerating, making it harder to develop evidence-based policy and to respond to emerging risks and opportunities. The federal government needs to strengthen its ability to manage AI risks without overregulating. It can do this by building three interconnected capacities:
- Situational awareness to detect, analyze, and communicate emerging AI risks and opportunities;
- Agile policymaking that can adapt and scale proportionately to evolving threats; and
- Incident response and readiness to manage and contain significant AI-related incidents should they arise.
The AI Action Plan provides an ambitious foundation for these capacities. But it remains a high-level blueprint, leaving gaps in coverage and open questions around implementation and authorities.
This report makes the case for robust, proactive federal government engagement in AI risk management. It examines the current state of U.S. preparedness, assesses the AI Action Plan’s contributions, identifies persistent shortcomings and gaps, and offers solutions to address them. The report advances the following recommendations for U.S. policymakers:
To establish AI situational awareness in government:
- Empower and equip the Center for AI Standards and Innovation (CAISI) as the federal government’s center of technical AI expertise and evaluation.
- Designate CAISI as the federal government’s interagency lead for AI risks.
- Fund CAISI sufficiently to execute its critical role.
- Strengthen information flows from frontier AI developers to government.
- Congress should pass a bill enacting greater protections for AI whistleblowers.
- The Office of Science and Technology Policy (OSTP) should work with CAISI and other relevant agencies to develop a plan for mandating information sharing and testing for dangerous capabilities, in case voluntary mechanisms prove inadequate.
To bolster policy agility:
- Establish an interagency AI National Security working group, co-led by OSTP and the National Security Council, to strengthen intragovernmental coordination on AI national security risks.
- Prepare contingency planning for AI risk scenarios to allow expedited policy action.
- Establish regular congressional reports by the AI National Security interagency working group to ensure Congress is aware of emerging risks and policy options.
- Work with allies and partners to harmonize policy approaches to identified AI risks.
To strengthen incident response capacity:
- Build stronger interconnectivity among the agencies and stakeholders that would need to coordinate a response to an incident.
- Engage AI companies and experts in updating the Cybersecurity and Infrastructure Security Agency incident response playbooks.
- Ensure the AI Information Sharing and Analysis Center also includes representatives from the AI industry.
- Conduct regular tabletop exercises with government, private sector, and nonprofit representatives to bolster connectivity between incident responders across the public and private sectors.
- Establish a mechanism for post-incident review and lesson learning.
- Engage internationally, including with adversaries, on best practices for real-time AI incident response.
Introduction
The rapid advancement of artificial intelligence (AI) presents policymakers with the challenge of governing a transformative technology that is both critical to national security and primarily driven by private innovation. The platitude that societies should harness the benefits of AI while managing its risks fails to address the central question: How?5 As AI capabilities continue to advance rapidly, developing a sophisticated answer has become essential for maintaining America’s technological leadership.
The Trump administration has signaled a clear commitment to American AI dominance, championing innovation over restrictive regulation.6 However, this innovation-first approach does not negate the need for risk assessment and preparedness.7 The potential consequences of advanced AI systems—from automated cyberattacks to exacerbated biosecurity risks to potential loss of control—demand serious attention from policymakers and national security experts. Leading researchers have identified scenarios where AI capabilities, if left unmanaged, could pose significant threats to national security, economic stability, and public safety.8 Yet the significant uncertainty about these scenarios necessitates a nimble approach that emphasizes increasing policymaker awareness and speedy response.
Recognizing both the opportunity and urgency of this moment, the White House released the AI Action Plan in July 2025, outlining an ambitious agenda for adopting AI and managing the uncertainty inherent in any rapidly evolving sector. As President Donald Trump outlined at the AI Action Plan’s launch, “This technology brings the potential for bad as well as for good, for peril as well as for progress. . . . We want to have rules, but they have to be smart.”9
The key to smart rules lies in developing evidence-based policies that can adapt to emerging capabilities without stifling innovation. Yet here lies a fundamental tension: AI is evolving far faster than the traditional policy cycle, and most early regulatory proposals inevitably will rely on projections rather than robust evidence. Policymakers must balance the imperative to ground decisions in data with the reality that waiting for comprehensive evidence risks leaving society vulnerable to rapidly emerging threats. For many AI-enabled risks, defaulting to a wait-and-see approach will be woefully inadequate, because effective mitigations will take time to develop. Moreover, waiting for threats to fully materialize before implementing safeguards could result in a catastrophic incident that triggers public backlash and undermines the future of American AI progress—much as the nuclear accidents at Three Mile Island and Chernobyl derailed nuclear energy development for decades.10 The stakes extend beyond domestic security and innovation. If the United States lags in shaping international AI standards and norms, it risks ceding strategic ground to competitors and allowing adversaries to shape the rules of the road for this critical technology.
Prioritizing three key capabilities can help the federal government balance the tension between evidence and speed and chart a path that promotes both U.S. innovation and security. First, the government needs situational awareness—the ability to monitor emerging AI capabilities and risks and understand their implications for national security and society. Second, the government should increase its policy agility—the ability to adapt and change policy settings quickly if risks materialize that warrant new approaches. Finally, the government needs robust incident response—the ability to effectively contain damage from AI incidents. Ideally, these capabilities will be mutually reinforcing: Situational awareness of emerging capabilities allows the government to develop targeted policies more nimbly, including those that bolster incident preparedness. Lessons learned from incident response can help the government update its situational awareness and inform whether policies and rules need urgent revision.
AI has the potential to revolutionize national defense, accelerate scientific breakthroughs, and strengthen American competitiveness for generations. Realizing this vision requires an environment where innovation can proceed with public confidence. Effective government preparedness helps establish the conditions under which development can flourish. When policymakers have visibility into emerging capabilities, they can better target evidence-based measures to address genuine risks without imposing sweeping restrictions. When the government can adapt rules agilely, it can better respond to opportunities and risks as technology quickly evolves. When the government has well-practiced, robust incident response frameworks, individual setbacks will not trigger the kind of panic that could set American AI back years. The three capabilities outlined in this report—situational awareness, policy agility, and incident preparedness—form the foundation of this enabling framework, securing American leadership by creating a stable, trusted environment in which innovation can continue at pace and at scale.
The AI Action Plan outlined important first steps for AI preparedness, establishing a solid blueprint for bolstering the federal government’s ability to understand, analyze, and respond to emerging capabilities and risks. The next phase requires translating that strategic vision into operational reality: many of the plan’s forward-looking initiatives need additional authorities, detailed rulemaking, and dedicated resourcing to move from concept to execution. Further work is also needed to plug residual gaps and position the United States to responsibly lead the world through the AI transition.
This report addresses these challenges and charts a pathway forward to bolster federal government AI preparedness. It outlines why government involvement in AI risk preparedness is critical to national security and U.S. global leadership. The analysis focuses on the three critical capacities the government needs to address AI risks effectively: situational awareness, policy agility, and incident preparedness. Finally, the report highlights gaps in the U.S. AI preparedness ecosystem and recommends measures to address these shortfalls.
Notes
1. Winning the Race: America’s AI Action Plan (The White House, July 2025), https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.
2. JD Vance, “Remarks by the Vice President at the Artificial Intelligence Action Summit in Paris, France,” public event, Paris, France, February 11, 2025, https://www.presidency.ucsb.edu/documents/remarks-the-vice-president-the-artificial-intelligence-action-summit-paris-france.
3. Michael Siegel et al., “Rethinking the Cybersecurity Arms Race,” MIT Sloan, April 10, 2025, https://web.archive.org/web/20250425004338/https://cams.mit.edu/wp-content/uploads/Safe-CAMS-MIT-Art; Beatrice Nolan, “OpenAI Warns Its Future Models Will Have a Higher Risk of Aiding Bioweapons Development,” Fortune, June 19, 2025, https://fortune.com/2025/06/19/openai-future-models-higher-risk-aiding-bioweapons-creation/; and Zachary Arnold and Helen Toner, “AI Accidents: An Emerging Threat,” Center for Security and Emerging Technology (CSET), July 2021, https://cset.georgetown.edu/publication/ai-accidents-an-emerging-threat/.
4. Michael Kratsios, “Remarks at the Security Council’s Open Debate on Artificial Intelligence and International Peace and Security,” UN Security Council, September 24, 2025, https://usun.usmission.gov/remarks-at-the-security-councils-open-debate-on-artificial-intelligence-and-international-peace-and-security/; and Oversight of A.I.: Principles for Regulation: Hearing Before the Senate Subcommittee on Privacy, Technology, and the Law, 118th Cong. 7 (2023) (statement of Dario Amodei, CEO, Anthropic), https://www.govinfo.gov/app/details/CHRG-118shrg53503/CHRG-118shrg53503.
5. Miles Brundage (@Miles_Brundage), “I’ve received some important intel—AI could pose great risks but *also* could have great benefits,” X (formerly Twitter), May 8, 2025, https://x.com/Miles_Brundage/status/1920546324451049986.
6. JD Vance, “Remarks by the Vice President at the Artificial Intelligence Action Summit in Paris, France.”
7. Winning the Race: America’s AI Action Plan, 2.
8. Kratsios, “Remarks at the Security Council’s Open Debate on Artificial Intelligence and International Peace and Security”; Oversight of A.I.: Principles for Regulation: Hearing Before the Senate Subcommittee on Privacy, Technology, and the Law; Sam Altman, “Machine Intelligence, Part 1,” personal website, February 15, 2015, https://blog.samaltman.com/machine-intelligence-part-1; and Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation: Hearing Before the Senate Committee on Commerce, Science, and Transportation, 119th Cong. 74 (2025) (statement of Sam Altman, CEO, OpenAI), https://www.congress.gov/119/chrg/CHRG-119shrg61426/CHRG-119shrg61426.pdf.
9. “Transcript: Donald Trump’s Address at ‘Winning the AI Race’ Event,” Tech Policy Press, July 24, 2025, https://www.techpolicy.press/transcript-donald-trumps-address-at-winning-the-ai-race-event/.
10. “First Nuclear Reactors Since 1970s Approved in US,” BBC News, February 9, 2012, https://www.bbc.com/news/world-us-canada-16973865; and Janet Egan and Cole Salvador, “The United States Must Avoid AI’s Chernobyl Moment,” Just Security, March 10, 2025, https://www.justsecurity.org/108644/united-states-must-avoid-ais-chernobyl-moment/.