
July 23, 2025
Noteworthy | America's AI Action Plan
Introduction
The United States is in a race to achieve global dominance in artificial intelligence (AI). Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits. Just like we won the space race, it is imperative that the United States and its allies win this race. President Trump took decisive steps toward achieving this goal during his first days in office by signing Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” calling for America to retain dominance in this global race and directing the creation of an AI Action Plan.

The U.S. already has clear leadership or leverage at virtually every level of the AI stack, and we should obviously seek to secure and extend this advantage. But words matter, and there's a tension between publicly calling for "dominance" on the one hand and seeking to "ensure our allies are building on American technology" on the other. We've already seen anxiety in foreign capitals about dependence on U.S. cloud providers, and this will surely extend to AI: foreign governments worry that Washington's weaponization of its economic leverage in the trade domain will continue in the digital one. Partnerships that pull foreign capitals into America's orbit while balancing the protection of their sovereignty with the protection of sensitive U.S. technology will be more effective than expecting outright dependence.
Winning the AI race will usher in a new golden age of human flourishing, economic competitiveness, and national security for the American people. AI will enable Americans to discover new materials, synthesize new chemicals, manufacture new drugs, and develop new methods to harness energy—an industrial revolution. It will enable radically new forms of education, media, and communication—an information revolution. And it will enable altogether new intellectual achievements: unraveling ancient scrolls once thought unreadable, making breakthroughs in scientific and mathematical theory, and creating new kinds of digital and physical art—a renaissance.
An industrial revolution, an information revolution, and a renaissance—all at once. This is the potential that AI presents. The opportunity that stands before us is both inspiring and humbling. And it is ours to seize, or to lose.
America’s AI Action Plan has three pillars: innovation, infrastructure, and international diplomacy and security. The United States needs to innovate faster and more comprehensively than our competitors in the development and distribution of new AI technology across every field, and dismantle unnecessary regulatory barriers that hinder the private sector in doing so. As Vice President Vance remarked at the Paris AI Action Summit in February, restricting AI development with onerous regulation “would not only unfairly benefit incumbents… it would mean paralyzing one of the most promising technologies we have seen in generations.” That is why President Trump rescinded the Biden Administration’s dangerous actions on day one.
We need to build and maintain vast AI infrastructure and the energy to power it. To do that, we will continue to reject radical climate dogma and bureaucratic red tape, as the Administration has done since Inauguration Day. Simply put, we need to “Build, Baby, Build!”
We need to establish American AI—from our advanced semiconductors to our models to our applications—as the gold standard for AI worldwide and ensure our allies are building on American technology.

The Biden administration's efforts to protect America's AI leadership relied heavily on controlling the export of advanced AI chips, culminating in the January 2025 AI Diffusion Framework that proposed unprecedented rules on compute access globally, which sparked backlash from allies and partners.
President Trump is taking a markedly different approach to AI leadership. His rescission of the Framework and announcement of major AI deals with Gulf states foreshadowed this strategic shift. Rather than focusing primarily on restriction, the new approach actively promotes U.S. AI technology overseas. The administration's AI Action Plan codifies this strategy in Pillar 3, positioning diffusion of the full AI tech stack as a cornerstone of American AI leadership. These initiatives will need to be backed by sustained diplomatic and commercial engagement to rebuild America's reputation as a trusted and reliable technology partner.
Several principles cut across each of these three pillars. First, American workers are central to the Trump Administration’s AI policy. The Administration will ensure that our Nation’s workers and their families gain from the opportunities created in this technological revolution. The AI infrastructure buildout will create high-paying jobs for American workers. And the breakthroughs in medicine, manufacturing, and many other fields that AI will make possible will increase the standard of living for all Americans. AI will improve the lives of Americans by complementing their work—not replacing it.
Second, our AI systems must be free from ideological bias and be designed to pursue objective truth rather than social engineering agendas when users seek factual information or analysis. AI systems are becoming essential tools, profoundly shaping how Americans consume information, but these tools must also be trustworthy.
Finally, we must prevent our advanced technologies from being misused or stolen by malicious actors as well as monitor for emerging and unforeseen risks from AI. Doing so will require constant vigilance.
This Action Plan sets forth clear policy goals for near-term execution by the Federal government. The Action Plan’s objective is to articulate policy recommendations that this Administration can deliver for the American people to achieve the President’s vision of global AI dominance. The AI race is America’s to win, and this Action Plan is our roadmap to victory.
Pillar I: Accelerate AI Innovation
America must have the most powerful AI systems in the world, but we must also lead the world in creative and transformative application of these systems. Achieving these goals requires the Federal government to create the conditions where private-sector-led innovation can flourish.
Remove Red Tape and Onerous Regulation
To maintain global leadership in AI, America’s private sector must be unencumbered by bureaucratic red tape. President Trump has already taken multiple steps toward this goal, including rescinding Biden Executive Order 14110 on AI that foreshadowed an onerous regulatory regime. AI is far too important to smother in bureaucracy at this early stage, whether at the state or Federal level. The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.
Recommended Policy Actions
- Led by the Office of Science and Technology Policy (OSTP), launch a Request for Information from businesses and the public at large about current Federal regulations that hinder AI innovation and adoption, and work with relevant Federal agencies to take appropriate action.
- Led by the Office of Management and Budget (OMB) and consistent with Executive Order 14192 of January 31, 2025, “Unleashing Prosperity Through Deregulation,” work with all Federal agencies to identify, revise, or repeal regulations, rules, memoranda, administrative orders, guidance documents, policy statements, and interagency agreements that unnecessarily hinder AI development or deployment.
- Led by OMB, work with Federal agencies that have AI-related discretionary funding programs to ensure, consistent with applicable law, that they consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.
- Led by the Federal Communications Commission (FCC), evaluate whether state AI regulations interfere with the agency’s ability to carry out its obligations and authorities under the Communications Act of 1934.
- Review all Federal Trade Commission (FTC) investigations commenced under the previous administration to ensure that they do not advance theories of liability that unduly burden AI innovation. Furthermore, review all FTC final orders, consent decrees, and injunctions, and, where appropriate, seek to modify or set aside any that unduly burden AI innovation.
After Congress stripped out the 10-year moratorium on state-level AI regulation from the "One Big Beautiful Bill," the administration now seeks to leverage AI-related federal funding to achieve a similar effect. The deterrent effect will hinge on how broadly the administration interprets "AI-related." As it pushes aggressively for AI adoption across government, I suspect the funds implicated will only grow with time.
Ensure that Frontier AI Protects Free Speech and American Values
AI systems will play a profound role in how we educate our children, do our jobs, and consume media. It is essential that these systems be built from the ground up with freedom of speech and expression in mind, and that U.S. government policy does not interfere with that objective. We must ensure that free speech flourishes in the era of AI and that AI procured by the Federal government objectively reflects truth rather than social engineering agendas.
Recommended Policy Actions
- Led by the Department of Commerce (DOC) through the National Institute of Standards and Technology (NIST), revise the NIST AI Risk Management Framework to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.
- Update Federal procurement guidelines to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias.

I don't envy the civil servants operationalizing this: ideological bias can be in the eye of the beholder, and First Amendment considerations may complicate things. High-performing LLMs will not be able to sidestep ideologically contested territory—advising on future weather trends, for example, will require models to draw on some understanding of climate science.
The inclusion of "top-down" here is important. LLM behaviors often emerge organically in training, rather than from deliberate developer choices. An LLM may exhibit ideological tendencies simply because they are overrepresented in the most useful training data. As AI systems become more central to our information ecosystem, there will be a growing need for credible approaches to differentiate behaviors driven by ideologically neutral performance goals from those reflecting deliberate ideological steering.
- Led by DOC through NIST’s Center for AI Standards and Innovation (CAISI), conduct research and, as appropriate, publish evaluations of frontier models from the People’s Republic of China for alignment with Chinese Communist Party talking points and censorship.

This is a great first step. There should be public evaluations for potential alignment with CCP values. The government should go further and support nascent efforts by companies, nonprofits, and even hobbyists to release those same models with the CCP talking points and censorship fine-tuned out of them. This mirrors a recommendation we submitted to the Trump administration in March as it developed the AI Action Plan.
Encourage Open-Source and Open-Weight AI
Open-source and open-weight AI models are made freely available by developers for anyone in the world to download and modify. Models distributed this way have unique value for innovation because startups can use them flexibly without being dependent on a closed model provider. They also benefit commercial and government adoption of AI because many businesses and governments have sensitive data that they cannot send to closed model vendors. And they are essential for academic research, which often relies on access to the weights and training data of a model to perform scientifically rigorous experiments.
In 2024, a National Telecommunications and Information Administration report similarly recognized that open-weight models offer many benefits. It also acknowledged that "current evidence is not sufficient to definitively determine" whether restrictions on open-weight models are warranted or whether restrictions "will never be appropriate in the future"—a position arguably not precluded by this Action Plan. But right now, Chinese open-weight models are besting U.S. models in capabilities and usage, and rectifying this should indeed be the current geostrategic priority.
We need to ensure America has leading open models founded on American values. Open-source and open-weight models could become global standards in some areas of business and in academic research worldwide. For that reason, they also have geostrategic value. While the decision of whether and how to release an open or closed model is fundamentally up to the developer, the Federal government should create a supportive environment for open models.
Recommended Policy Actions
- Ensure access to large-scale computing power for startups and academics by improving the financial market for compute. Currently, a company seeking to use large-scale compute must often sign long-term contracts with hyperscalers—far beyond the budgetary reach of most academics and many startups. America has solved this problem before with other goods through financial markets, such as spot and forward markets for commodities. Through collaboration with industry, NIST at DOC, OSTP, and the National Science Foundation’s (NSF) National AI Research Resource (NAIRR) pilot, the Federal government can accelerate the maturation of a healthy financial market for compute.
- Partner with leading technology companies to increase the research community’s access to world-class private sector computing, models, data, and software resources as part of the NAIRR pilot.
- Build the foundations for a lean and sustainable NAIRR operations capability that can connect an increasing number of researchers and educators across the country to critical AI resources.
- Continue to foster the next generation of AI breakthroughs by publishing a new National AI Research and Development (R&D) Strategic Plan, led by OSTP, to guide Federal AI research investments.
- Led by DOC through the National Telecommunications and Information Administration (NTIA), convene stakeholders to help drive adoption of open-source and open-weight models by small and medium-sized businesses.
The NAIRR dates back to a 2020 bill and has enjoyed bipartisan support ever since. It recognizes that building a diverse American AI ecosystem requires democratizing access to compute, which is among the biggest barriers to entry. Otherwise we risk a world of compute "haves" and "have nots" and a few large companies becoming gatekeepers to the AI ecosystem. This problem has only grown worse as global demand for AI compute has driven up costs.
Enable AI Adoption
Today, the bottleneck to harnessing AI’s full potential is not necessarily the availability of models, tools, or applications. Rather, it is the limited and slow adoption of AI, particularly within large, established organizations. Many of America’s most critical sectors, such as healthcare, are especially slow to adopt due to a variety of factors, including distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards. A coordinated Federal effort would be beneficial in establishing a dynamic, “try-first” culture for AI across American industry.
Recommended Policy Actions
- Establish regulatory sandboxes or AI Centers of Excellence around the country where researchers, startups, and established enterprises can rapidly deploy and test AI tools while committing to open sharing of data and results. These efforts would be enabled by regulatory agencies such as the Food and Drug Administration (FDA) and the Securities and Exchange Commission (SEC), with support from DOC through its AI evaluation initiatives at NIST.
- Launch several domain-specific efforts (e.g., in healthcare, energy, and agriculture), led by NIST at DOC, to convene a broad range of public, private, and academic stakeholders to accelerate the development and adoption of national standards for AI systems and to measure how much AI increases productivity at realistic tasks in those domains.
- Led by the Department of Defense (DOD) in coordination with the Office of the Director of National Intelligence (ODNI), regularly update joint DOD-Intelligence Community (IC) assessments of the comparative level of adoption of AI tools by the United States, its competitors, and its adversaries’ national security establishments, and establish an approach for continuous adaptation of the DOD and IC’s respective AI adoption initiatives based on these AI net assessments.
- Prioritize, collect, and distribute intelligence on foreign frontier AI projects that may have national security implications, via collaboration between the IC, the Department of Energy (DOE), CAISI at DOC, the National Security Council (NSC), and OSTP.
The U.S. lacks a net assessment capability for emerging technology. In other words, we don't have an ongoing capacity to identify strategic technologies on the horizon and then assess how the U.S. stacks up against competitors. Typically, the intelligence community looks abroad while the Commerce Department looks at home, but these analyses are rarely fused. This is a major gap.
We have a direct interest in understanding whether we're ahead or behind in AI adoption compared to our adversaries—for instance, in our armed forces, intelligence community, and scientific enterprise. This will help and frankly should be a model for technology net assessment broadly. I wrote about this for Lawfare in 2024.
Empower American Workers in the Age of AI
The Trump Administration supports a worker-first AI agenda. By accelerating productivity and creating entirely new industries, AI can help America build an economy that delivers more pathways to economic opportunity for American workers. But it will also transform how work gets done across all industries and occupations, demanding a serious workforce response to help workers navigate that transition. The Trump Administration has already taken significant steps to lead on this front, including the April 2025 Executive Orders 14277 and 14278, “Advancing Artificial Intelligence Education for American Youth” and “Preparing Americans for High-Paying Skilled Trade Jobs of the Future.” To continue delivering on this vision, the Trump Administration will advance a priority set of actions to expand AI literacy and skills development, continuously evaluate AI’s impact on the labor market, and pilot new innovations to rapidly retrain and help workers thrive in an AI-driven economy.

This language indicates that the Trump administration will take a bit of a different approach to AI workforce development than the Biden administration. The Biden administration focused most of its workforce development efforts on R&D-relevant skills. Biden administration-era initiatives primarily aimed to increase the number of folks with the hard technical skills needed to build and train frontier models and operate fabs. This approach was, and still is, necessary. To stay at the leading edge of AI, America needs more engineers, technicians, and data scientists.
But America also needs informed and educated AI adopters to stay at the forefront of the technology. This is where the Trump administration seems poised to focus its workforce development efforts. The AI Action Plan highlights the importance of bolstering the United States' AI deployment and implementation skills. If the Biden administration prioritized the skills required to produce cutting-edge AI models, the Trump administration will prioritize the skills required to successfully integrate those models throughout society.
The Trump administration's approach makes sense given the rapid pace of AI advancement. AI is no longer confined to research labs—it's now present in many facets of everyday life and accessible to nearly all people. Leading this next AI wave requires a citizenry that knows how to use the technology responsibly and is prepared to harness its full potential.
Recommended Policy Actions
- Led by the Department of Labor (DOL), the Department of Education (ED), NSF, and DOC, prioritize AI skill development as a core objective of relevant education and workforce funding streams. This should include promoting the integration of AI skill development into relevant programs, including career and technical education (CTE), workforce training, apprenticeships, and other federally supported skills initiatives.
- Led by the Department of the Treasury, issue guidance clarifying that many AI literacy and AI skill development programs may qualify as eligible educational assistance under Section 132 of the Internal Revenue Code, given AI’s widespread impact reshaping the tasks and skills required across industries and occupations. In applicable situations, this will enable employers to offer tax-free reimbursement for AI-related training and help scale private-sector investment in AI skill development, preserving jobs for American workers.
Section 132 of the tax code clarifies which employer fringe benefits are tax-exempt. This is a smart and targeted way to promote AI literacy and skilling across the economy, helping employers view these offerings as essential, just as many now do for cybersecurity training.
Invest in AI-Enabled Science
Like many other domains, science itself will be transformed by AI. AI systems can already generate models of protein structures, novel materials, and much else. Increasingly powerful general-purpose models show promise in formulating hypotheses and designing experiments. These nascent capabilities promise to accelerate scientific advancement. They will only do so, however, with critical changes in the way science is conducted, including the enabling scientific infrastructure. AI-enabled predictions are of little use if scientists cannot also increase the scale of experimentation. Basic science today is often a labor-intensive process; the AI era will require more scientific and engineering research to transform theories into industrial-scale enterprises. This, in turn, will necessitate new infrastructure and support of new kinds of scientific organizations.
Recommended Policy Actions
- Through NSF, DOE, NIST at DOC, and other Federal partners, invest in automated cloud-enabled labs for a range of scientific fields, including engineering, materials science, chemistry, biology, and neuroscience, built by, as appropriate, the private sector, Federal agencies, and research institutions in coordination and collaboration with DOE National Laboratories.
- Use long-term agreements to support Focused-Research Organizations or other similar entities using AI and other emerging technologies to make fundamental scientific advancements.
- Incentivize researchers to release more high-quality datasets publicly by considering the impact of scientific and engineering datasets from a researcher’s prior funded efforts in the review of proposals for new projects.
- Require federally funded researchers to disclose non-proprietary, non-sensitive datasets that are used by AI models during the course of research and experimentation.
This seems like a smart way to leverage massive federal science and R&D funding to boost the availability of high-quality data, the fuel for AI progress. An issue, of course, is that high-quality, publicly available data isn't just available to U.S. AI labs.
Advance the Science of AI
Just as LLMs and generative AI systems represented a paradigm shift in the science of AI, future breakthroughs may similarly transform what is possible with AI. It is imperative that the United States remain the leading pioneer of such breakthroughs, and this begins with strategic, targeted investment in the most promising paths at the frontier.
Recommended Policy Actions
- Prioritize investment in theoretical, computational, and experimental research to preserve America’s leadership in discovering new and transformative paradigms that advance the capabilities of AI, reflecting this priority in the forthcoming National AI R&D Strategic Plan.

After advanced AI chips, China’s biggest shortcoming relative to the United States is its limited capacity for high-quality basic research. This gap exists largely because of America's robust federal funding for science. Public investment fills what private capital avoids: long-horizon, uncertain-payoff research with broad public value. It's good to see the focus on foundational research in the Action Plan—the thing to watch now will be follow-through on funding levels and appropriate incentives for high-risk, high-reward research.
Invest in AI Interpretability, Control, and Robustness Breakthroughs
Today, the inner workings of frontier AI systems are poorly understood. Technologists know how LLMs work at a high level, but often cannot explain why a model produced a specific output. This can make it hard to predict the behavior of any specific AI system. This lack of predictability, in turn, can make it challenging to use advanced AI in defense, national security, or other applications where lives are at stake. The United States will be better able to use AI systems to their fullest potential in high-stakes national security domains if we make fundamental breakthroughs on these research problems.
Recommended Policy Actions
- Launch a technology development program led by the Defense Advanced Research Projects Agency (DARPA) in collaboration with CAISI at DOC and NSF, to advance AI interpretability, AI control systems, and adversarial robustness.
- Prioritize fundamental advancements in AI interpretability, control, and robustness as part of the forthcoming National AI R&D Strategic Plan.
- The DOD, DOE, CAISI at DOC, the Department of Homeland Security (DHS), NSF, and academic partners should coordinate an AI hackathon initiative to solicit the best and brightest from U.S. academia to test AI systems for transparency, effectiveness, use control, and security vulnerabilities.
Frontier AI systems offer enormous potential for analyzing data at speed and scale across cyber, surveillance, military, and intelligence operations—if they are reliable. It is great to see the Action Plan recognize that the national security enterprise cannot be a passive consumer of emerging AI capabilities: it has a crucial role to play in sending a demand signal for interpretable, controllable, and robust systems.
Accelerate AI Adoption in Government
With AI tools in use, the Federal government can serve the public with far greater efficiency and effectiveness. Use cases include accelerating slow and often manual internal processes, streamlining public interactions, and many others. Taken together, transformative use of AI can help deliver the highly responsive government the American people expect and deserve.
OMB has already advanced AI adoption in government by reducing onerous rules imposed by the Biden Administration. Now is the time to build on this success.

It would be quite interesting to see how AI tools could be used to enhance enforcement of U.S. policy tools, such as export controls and sanctions. The creakiness of current databases and IT systems in key economic security functions is a well-known problem, yet no administration has been able to pull together the budgetary resources, technical know-how, and systems thinking to implement the necessary transformation. Kudos to the Trump team if they can finally pull this off.
Recommended Policy Actions
- Formalize the Chief Artificial Intelligence Officer Council (CAIOC) as the primary venue for interagency coordination and collaboration on AI adoption. Through the CAIOC, initiate strategic coordination and collaboration with relevant Federal executive councils, to include: the President’s Management Council, Chief Data Officer Council, Chief Information Officer Council, Interagency Council on Statistical Policy, Chief Human Capital Officer Council, and Federal Privacy Council.
- Create a talent-exchange program designed to allow rapid details of Federal staff to other agencies in need of specialized AI talent (e.g., data scientists and software engineers), with input from the Office of Personnel Management.
Meta's recent hiring spree underscores that top AI talent commands a premium and that its shortage can be just as much of a bottleneck as compute or capital. The private sector will usually outcompete the government for top talent. While the federal government may not be training frontier models, it still needs trained data scientists and software engineers to realize the AI ambitions in this plan. It is good to see the government proactively thinking about how best to take advantage of its existing talent.
- Create an AI procurement toolbox managed by the General Services Administration (GSA), in coordination with OMB, that facilitates uniformity across the Federal enterprise to the greatest extent practicable. This system would allow any Federal agency to easily choose among multiple models in a manner compliant with relevant privacy, data governance, and transparency laws. Agencies should also have ample flexibility to customize models to their own ends, as well as to see a catalog of other agency AI uses (based on OMB’s pre-existing AI Use Case Inventory).
- Implement an Advanced Technology Transfer and Capability Sharing Program with GSA to quickly transfer advanced AI capabilities and use cases between agencies.
- Mandate that all Federal agencies ensure—to the maximum extent practicable—that all employees whose work could benefit from access to frontier language models have access to, and appropriate training for, such tools.
- Convene, under the auspices of OMB, a cohort of agencies with High Impact Service Providers to pilot and increase the use of AI to improve the delivery of services to the public.
Protect Commercial and Government AI Innovations
Maintaining American leadership in AI necessitates that the U.S. government work closely with industry to appropriately balance the dissemination of cutting-edge AI technologies with national security concerns. It is also essential for the U.S. government to effectively address security risks to American AI companies, talent, intellectual property, and systems.
Recommended Policy Actions
- Led by DOD, DHS, CAISI at DOC, and other appropriate members of the IC, collaborate with leading American AI developers to enable the private sector to actively protect AI innovations from security risks, including malicious cyber actors, insider threats, and others.
The costs of training an AI model at the technological frontier continue to escalate. As America strengthens its lead, nation-state adversaries will be increasingly incentivized to acquire these technologies through other means. Adequate cyber, physical, and personnel security are essential to prevent theft of sensitive intellectual property such as AI model weights—the core parameters that determine an AI system's capabilities—and to ensure systems are protected from sabotage.
The private sector alone does not possess the requisite threat intelligence and national security insights to take proportionate action against sophisticated adversaries. This action will play a key role in addressing this capability gap (though in the longer term, mandatory requirements may ultimately be needed). It will be important that this support extends to AI data center owners and operators, as well as to AI labs themselves.
Pillar II: Build American AI Infrastructure
AI is the first digital service in modern life that challenges America to build vastly greater energy generation than we have today. American energy capacity has stagnated since the 1970s while China has rapidly built out its grid. America’s path to AI dominance depends on changing this troubling trend.
Create Streamlined Permitting for Data Centers, Semiconductor Manufacturing Facilities, and Energy Infrastructure while Guaranteeing Security
Like most general-purpose technologies of the past, AI will require new infrastructure—factories to produce chips, data centers to run those chips, and new sources of energy to power it all. America’s environmental permitting system and other regulations make it almost impossible to build this infrastructure in the United States with the speed that is required. This infrastructure must also not be built with any adversarial technology that could undermine U.S. AI dominance.
The administration will need to consider the potential impact of its trade policy on U.S. AI infrastructure build-out, and balance the desire to onshore manufacturing of chips and other critical inputs with the need for speed when it comes to the AI build-out. Addressing unnecessary red tape on permitting is helpful, but could be counteracted by new barriers to data center build-out created by tariffs.
Fortunately, the Trump Administration has made unprecedented progress in reforming this system. Since taking office, President Trump has already reformed National Environmental Policy Act (NEPA) regulations across almost every relevant Federal agency, jumpstarted a permitting technology modernization program, created the National Energy Dominance Council (NEDC), and launched the United States Investment Accelerator. Now is the time to build on that momentum.
Recommended Policy Actions
- Establish new Categorical Exclusions under NEPA to cover data center-related actions that normally do not have a significant effect on the environment. Where possible, adopt Categorical Exclusions already established by other agencies so that each relevant agency can proceed with maximum efficiency.
- Expand the use of the FAST-41 process to cover all data center and data center energy projects eligible under the Fixing America’s Surface Transportation Act of 2015.
- Explore the need for a nationwide Clean Water Act Section 404 permit for data centers, and, if adopted, ensure that this permit does not require a Pre-Construction Notification and covers development sites consistent with the size of a modern AI data center.
- Expedite environmental permitting by streamlining or reducing regulations promulgated under the Clean Air Act, the Clean Water Act, the Comprehensive Environmental Response, Compensation, and Liability Act, and other relevant related laws.
Not surprised to see the focus on energy here, given it's a significant bottleneck to the development of AI infrastructure in the U.S. The energy requirements for frontier AI training continue to grow. Anthropic estimates it will need 5 gigawatts to develop a single frontier AI model in 2028, with the total U.S. AI sector needing at least 50 gigawatts by the same year. This is equivalent to around 24 Hoover Dams' worth of energy capacity. With China rapidly building out its own energy infrastructure, this is emerging as another critical field of AI competition.
However, the administration is approaching the limits of what executive action alone can achieve. While initiatives to streamline permitting and reduce regulatory delays are a positive step, major energy infrastructure projects still face the risk of prolonged litigation, which could significantly delay deployment timelines. The administration should carefully monitor progress here, and may also wish to explore opportunities for joint development with close allies.
- Make Federal lands available for data center construction and the construction of power generation infrastructure for those data centers by directing agencies with significant land portfolios to identify sites suited to large-scale development.
- Maintain security guardrails to prohibit adversaries from inserting sensitive inputs to this infrastructure. Ensure that the domestic AI computing stack is built on American products and that the infrastructure that supports AI development such as energy and telecommunications are free from foreign adversary information and communications technology and services (ICTS)—including software and relevant hardware.
- Expand efforts to apply AI to accelerate and improve environmental reviews, such as through expanding the number of agencies participating in DOE’s PermitAI project.
Under a 2019 Executive Order, the Department of Commerce has the authority to block or mitigate the import or use of ICTS from foreign adversaries. This recommendation signals that a long-awaited rulemaking on how such restrictions should apply to data centers may be coming soon.
Restore American Semiconductor Manufacturing
America jump-started modern technology with the invention of the semiconductor. Now America must bring semiconductor manufacturing back to U.S. soil. A revitalized U.S. chip industry will generate thousands of high-paying jobs, reinforce our technological leadership, and protect our supply chains from disruption by foreign rivals. The Trump Administration will lead that revitalization without making bad deals for the American taxpayer or saddling companies with sweeping ideological agendas.
Recommended Policy Actions
In a notable omission, there is no mention here of using tariffs to incentivize the reshoring of American chip manufacturing. This is particularly curious given the administration's intense focus on trade overall and its inclination to reach for tariffs first across a wide range of policy problems. The Department of Commerce also has an active trade investigation underway to determine whether to impose new tariffs on chips, tooling, and derivative products in an effort to promote reshoring.
- Led by DOC’s revamped CHIPS Program Office, continue focusing on delivering a strong return on investment for the American taxpayer and removing all extraneous policy requirements for CHIPS-funded semiconductor manufacturing projects. DOC and other relevant Federal agencies should also collaborate to streamline regulations that slow semiconductor manufacturing efforts.
- Led by DOC, review semiconductor grant and research programs to ensure that they accelerate the integration of advanced AI tools into semiconductor manufacturing.
It's a relief to see the Trump administration backing off its earlier threats against the DOC's CHIPS Program Office and instead leveraging a "revamped" office focused on delivering strong returns from CHIPS Act investments.
The DOC's CHIPS Program Office is already catalyzing an unprecedented manufacturing investment boom. In just four years, the U.S. has attracted nearly $450 billion in planned investments—more than the previous three decades combined.
From here, the CHIPS Program Office should (1) deepen its investment in workforce development (the U.S. faces a projected shortfall of 67,000 workers in the semiconductor industry); and (2) streamline permitting and regulatory barriers that currently delay fab construction.
Train a Skilled Workforce for AI Infrastructure
To build the infrastructure needed to power America’s AI future, we must also invest in the workforce that will build, operate, and maintain it—including roles such as electricians, advanced HVAC technicians, and a host of other high-paying occupations. To address the shortages in many of these critical jobs, the Trump Administration should identify the priority roles that underpin AI infrastructure, develop modern skills frameworks, support industry-driven training, and expand early pipelines through general education, CTE, and Registered Apprenticeships to fuel American AI leadership.
Recommended Policy Actions
- Led by DOL and DOC, create a national initiative to identify high-priority occupations essential to the buildout of AI-related infrastructure. This effort would convene employers, industry groups, and other workforce stakeholders to develop or identify national skill frameworks and competency models for these roles. These frameworks would provide voluntary guidance that may inform curriculum design, credential development, and alignment of workforce investments.
- Through DOL, DOE, ED, NSF, and DOC, partner with state and local governments and workforce system stakeholders to support the creation of industry-driven training programs that address workforce needs tied to priority AI infrastructure occupations. These programs should be co-developed by employers and training partners to ensure individuals who complete the program are job-ready and directly connected to the hiring process. Models could also be explored that incentivize employer upskilling of incumbent workers into priority occupations. DOC should integrate these training models as a core workforce component of its infrastructure investment programs. Funding for this strategy will be prioritized based on a program’s ability to address identified pipeline gaps and deliver talent outcomes aligned to employer demand.
- Led by DOL, ED, and NSF, partner with education and workforce system stakeholders to expand early career exposure programs and pre-apprenticeships that engage middle and high school students in priority AI infrastructure occupations. These efforts should focus on creating awareness and excitement about these jobs, aligning with local employer needs, and providing on-ramps into high-quality training and Registered Apprenticeship programs.
The quantum technology industry provides some strong examples of middle and high school student engagement that could be modeled to increase students' exposure to and interest in AI infrastructure occupations.
The Pathways to Quantum Summer Immersion Program, for instance, is a three-week hybrid program that offers job shadowing opportunities, virtual courses, and site visits to rising high school seniors, helping them learn the fundamentals of quantum science and explore careers in the quantum field. Qubit by Qubit's National High School Research Program in Quantum Computing is another successful initiative that exposes young people to quantum concepts and develops their technical and research skills. Similar hands-on, experiential learning opportunities would help middle and high school students understand the breadth of AI infrastructure occupations available and equip them with the resources needed to pursue those critical jobs.
- Through the ED Office of Career, Technical, and Adult Education, provide guidance to state and local CTE systems about how to update programs of study to align with priority AI infrastructure occupations. This includes refreshing curriculum, expanding dual enrollment options, and strengthening connections between CTE programs, employers, and training providers serving AI infrastructure occupations.
- Led by DOL, expand the use of Registered Apprenticeships in occupations critical to AI infrastructure. Efforts should focus on streamlining the launch of new programs in priority industries and occupations and removing barriers to employer adoption, including simplifying registration, supporting intermediaries, and aligning program design with employer needs.
- Led by DOE, expand the hands-on research training and development opportunities for undergraduate, graduate, and postgraduate students and educators, leveraging expertise and capabilities in AI across its national laboratories. This should include partnering with community colleges and technical/career colleges to prepare new workers and help transition the existing workforce to fill critical AI roles.
Promote Mature Federal Capacity for AI Incident Response
The proliferation of AI technologies means that prudent planning is required to ensure that, if systems fail, the impacts to critical services or infrastructure are minimized and response is swift. To prepare for such an eventuality, the U.S. government should promote the development and incorporation of AI incident response actions into existing incident response doctrine and best practices for both the public and private sectors.
Great to see the administration increasing its preparedness and readiness to manage AI risks and incidents. We learned this lesson with nuclear energy: a couple of major incidents undermined decades of progress. With AI, polling shows that the social license is already fragile. We cannot afford for a large-scale AI incident to erode public trust, stall innovation, and undermine U.S. AI leadership.
Recommended Policy Actions
- Led by NIST at DOC, including CAISI, partner with the AI and cybersecurity industries to ensure AI is included in the establishment of standards, response frameworks, best practices, and technical capabilities (e.g., fly-away kits) of incident response teams.
- Modify the Cybersecurity and Infrastructure Security Agency’s Cybersecurity Incident & Vulnerability Response Playbooks to incorporate considerations for AI systems and to include requirements for Chief Information Security Officers to consult with Chief AI Officers, Senior Agency Officials for Privacy, CAISI at DOC, and other agency officials as appropriate. Agencies should update their subordinate playbooks accordingly.
- Led by DOD, DHS, and ODNI, in coordination with OSTP, NSC, OMB, and the Office of the National Cyber Director, encourage the responsible sharing of AI vulnerability information as part of ongoing efforts to implement Executive Order 14306, “Sustaining Select Efforts to Strengthen the Nation’s Cybersecurity and Amending Executive Order 13694 and Executive Order 14144.”

These actions are welcome but insufficient to prepare for an AI-related cyber contingency. To move from reactive coordination to proactive preparedness, the administration should include public-private operational collaboration as a core component of bolstering incident response. Public-private cyber partnerships have largely focused on information-sharing rather than pre-incident joint planning and training. This must change to prepare for highly capable cyber adversaries intent on exploiting vulnerabilities in AI systems.
To accomplish this, the federal government should work with industry to establish joint incident response playbooks and conduct regular capacity-building exercises that include AI-specific scenarios. Operational integration like this will enable federal cyber teams, such as the teams at the Department of Homeland Security and the Federal Bureau of Investigation, to deploy in a coordinated and efficient manner with the private sector when an incident occurs.
Pillar III: Lead in International AI Diplomacy and Security
To succeed in the global AI competition, America must do more than promote AI within its own borders. The United States must also drive adoption of American AI systems, computing hardware, and standards throughout the world. America currently is the global leader on data center construction, computing hardware performance, and models. It is imperative that the United States leverage this advantage into an enduring global alliance, while preventing our adversaries from free-riding on our innovation and investment.
The United States should absolutely seek to build an enduring global alliance on AI. But AI diplomacy does not exist in a geopolitical vacuum, and some of the administration's other policy actions that have rankled partners and allies, particularly on trade policy, have raised questions about whether the United States is a trusted partner and will make this effort more difficult.
Export American AI to Allies and Partners
The United States must meet global demand for AI by exporting its full AI technology stack—hardware, models, software, applications, and standards—to all countries willing to join America’s AI alliance. A failure to meet this demand would be an unforced error, causing these countries to turn to our rivals. The distribution and diffusion of American technology will stop our strategic rivals from making our allies dependent on foreign adversary technology.
The focus on exporting the "full AI technology stack" is critical. The U.S. AI sector continues to concentrate on advancing frontier capabilities in pursuit of artificial general intelligence (AGI). This has led to a notable underinvestment in domain-specific, real-world applications. Ironically, it is precisely these practical applications that are most attractive to international partners seeking to harness AI for immediate developmental and economic gains. This imbalance risks creating a vacuum that China may fill by offering near-term, applied AI solutions tailored to partner countries’ needs.
Focusing on providing full-stack AI solutions, as opposed to just the raw AI chips, will bring other countries closer into the U.S. technology ecosystem. It would also reduce the incentive for nations to undertake the costly and complex task of building sovereign AI capabilities from scratch, reducing the likelihood of alternative AI ecosystems.
Recommended Policy Actions
- Establish and operationalize a program within DOC aimed at gathering proposals from industry consortia for full-stack AI export packages. Once consortia are selected by DOC, the Economic Diplomacy Action Group, the U.S. Trade and Development Agency, the Export-Import Bank, the U.S. International Development Finance Corporation, and the Department of State (DOS) should coordinate with DOC to facilitate deals that meet U.S.-approved security requirements and standards.
In the global transition to 4G and 5G networks, Chinese technology companies like Huawei often beat out Western competitors because they offered bundled packages of financing, hardware, services, and ongoing support. By experimenting with "full-stack AI export packages," the administration has not only internalized this lesson but signaled its interest in combining U.S. "promote" tools in a more focused and coherent way. This is overdue, and the administration should consider this not only for AI but U.S. tech offerings broadly. CNAS experts have written about the needed shift in U.S. technology statecraft in our project on Countering the Digital Silk Road, which has case studies on Indonesia, Brazil, and Kenya.
Counter Chinese Influence in International Governance Bodies
A large number of international bodies, including the United Nations, the Organisation for Economic Co-operation and Development, G7, G20, International Telecommunication Union, Internet Corporation for Assigned Names and Numbers, and others have proposed AI governance frameworks and AI development strategies. The United States supports likeminded nations working together to encourage the development of AI in line with our shared values. But too many of these efforts have advocated for burdensome regulations, vague “codes of conduct” that promote cultural agendas that do not align with American values, or have been influenced by Chinese companies attempting to shape standards for facial recognition and surveillance.
Beyond the tonal shift, this has effectively been the U.S. position for some time. The United States is generally skeptical of bodies like the United Nations creating enforceable rules for AI development or use, especially in discussions about lethal autonomous weapons. With that said, the United States has proposed many "vague codes of conduct" itself, such as the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.
This seems to represent less of a policy shift than a rhetorical doubling down on skepticism of multilateral AI governance.
Strengthen AI Compute Export Control Enforcement
Advanced AI compute is essential to the AI era, enabling both economic dynamism and novel military capabilities. Denying our foreign adversaries access to this resource, then, is a matter of both geostrategic competition and national security. Therefore, we should pursue creative approaches to export control enforcement.
Chip smuggling appears to be a growing problem, and creative solutions like location verification are indeed underexplored. Last month, CNAS released the working paper Countering AI Chip Smuggling Has Become a National Security Priority: An Updated Playbook for Preventing AI Chip Smuggling to the PRC.
Recommended Policy Actions
- Led by DOC, OSTP, and NSC in collaboration with industry, explore leveraging new and existing location verification features on advanced AI compute to ensure that the chips are not in countries of concern.
- Establish a new effort led by DOC to collaborate with IC officials on global chip export control enforcement. This would include monitoring emerging technology developments in AI compute to ensure full coverage of possible countries or regions where chips are being diverted. This enhanced monitoring could then be used to expand and increase end-use monitoring in countries where there is a high risk of diversion of advanced, U.S.-origin AI compute, especially where there is not a Bureau of Industry and Security Export Control Officer present in-country.
Investing in greater BIS enforcement capabilities, including through further collaboration with the IC, is a great idea. The administration should continue to push Congress to increase BIS' budget to deliver on these priorities.
Plug Loopholes in Existing Semiconductor Manufacturing Export Controls
Semiconductors are among the most complex inventions ever conceived by man. America and its close allies hold near-monopolies on many critical components and processes in the semiconductor manufacturing pipeline. We must continue to lead the world with pathbreaking research and new inventions in semiconductor manufacturing, but the United States must also prevent our adversaries from using our innovations to their own ends in ways that undermine our national security. This requires new measures to address gaps in semiconductor manufacturing export controls, coupled with enhanced enforcement.
Recommended Policy Actions
- Led by DOC, develop new export controls on semiconductor manufacturing subsystems. Currently, the United States and its allies impose export controls on major systems necessary for semiconductor manufacturing, but do not control many of the component sub-systems.
The U.S. should absolutely close genuine loopholes in semiconductor manufacturing equipment (SME) export controls. Expanding controls to include component subsystems is a logical next step to limit China’s ability to indigenize advanced chip production—but this must be done carefully, precisely, and multilaterally.
The DOC should undertake a detailed study to identify which component subsystems—where the U.S. and its allies hold relative monopolies—are contributing to efforts such as China’s $43 billion drive to develop domestic EUV lithography systems. But further controls must be implemented cautiously, balancing the goal of curbing China’s capabilities against the need to preserve U.S. leadership in semiconductor manufacturing.
Additionally, any new controls on semiconductor subsystems must be coordinated with key producer countries. If foreign competitors can still sell similar products without restriction, U.S. firms will lose market share and revenue that fuels reinvestment in advanced R&D. It's a good thing that this recommendation prioritizes aligning any export control measures globally. Export controls work best when they are narrow, enforceable, and aligned with partners and allies. Broad, unilateral restrictions that outpace global consensus may end up strengthening competitors—and undermining the very goals they intend to serve.
Align Protection Measures Globally
America must impose strong export controls on sensitive technologies. We should encourage partners and allies to follow U.S. controls, and not backfill. If they backfill, America should use tools such as the Foreign Direct Product Rule and secondary tariffs to achieve greater international alignment.
Allies are well aware of the U.S. threat of extraterritorial controls. It would be helpful to see more emphasis on positive inducements to align with the United States. We are doing work at CNAS to develop the idea of economic security agreements, which would provide positive incentives (e.g., fast-track licensing processes) as part of a broader deal to strengthen controls vis-a-vis third-party threats. Extraterritorial controls are the hardest to enforce and the easiest to circumvent, so they should be used sparingly with those constraints in mind.
Recommended Policy Actions
- Led by DOC and DOS and in coordination with NSC, DOE, and NSF, develop, implement, and share information on complementary technology protection measures, including in basic research and higher education, to mitigate risks from strategic adversaries and concerning entities. This work should build on existing efforts underway at DOS and DOC, or, where necessary, involve new diplomatic campaigns.
- Develop a technology diplomacy strategic plan for an AI global alliance to align incentives and policy levers across government to induce key allies to adopt complementary AI protection systems and export controls across the supply chain, led by DOS in coordination with DOC, DOD, and DOE. This plan should aim to ensure that American allies do not supply adversaries with technologies on which the U.S. is seeking to impose export controls.
Following the policy reversal on controlling the export of Nvidia's H20 chips to China, many partners and allies are asking what exactly the U.S. strategy on semiconductor export controls is. The administration should use this diplomatic strategic plan to provide greater clarity and assurance about how it intends to use export controls in the context of trade and technology competition with China.
- Expand new initiatives for promoting plurilateral controls for the AI tech stack, avoiding the sole reliance on multilateral treaty bodies to accomplish this objective, while also encompassing existing U.S. controls and all future controls to level the playing field between U.S. and allied controls.
This is a welcome call for plurilateral cooperation on export controls, but encouraging other allies to follow the U.S. lead on these issues is going to take some work. The administration should put forward concrete proposals on how to advance international cooperation on dual-use export controls beyond the existing Wassenaar Arrangement.
Ensure that the U.S. Government is at the Forefront of Evaluating National Security Risks in Frontier Models
The most powerful AI systems may pose novel national security risks in the near future in areas such as cyberattacks and the development of chemical, biological, radiological, nuclear, or explosives (CBRNE) weapons, as well as novel security vulnerabilities. Because America currently leads on AI capabilities, the risks present in American frontier models are likely to be a preview for what foreign adversaries will possess in the near future. Understanding the nature of these risks as they emerge is vital for national defense and homeland security.
Recommended Policy Actions
- Evaluate frontier AI systems for national security risks in partnership with frontier AI developers, led by CAISI at DOC in collaboration with other agencies with relevant expertise in CBRNE and cyber risks.
- Led by CAISI at DOC in collaboration with national security agencies, evaluate and assess potential security vulnerabilities and malign foreign influence arising from the use of adversaries’ AI systems in critical infrastructure and elsewhere in the American economy, including the possibility of backdoors and other malicious behavior. These evaluations should include assessments of the capabilities of U.S. and adversary AI systems, the adoption of foreign AI systems, and the state of international AI competition.
- Prioritize the recruitment of leading AI researchers at Federal agencies, including NIST and CAISI within DOC, DOE, DOD, and the IC, to ensure that the Federal government can continue to offer cutting-edge evaluations and analysis of AI systems.
- Build, maintain, and update as necessary national security-related AI evaluations through collaboration between CAISI at DOC, national security agencies, and relevant research institutions.
It was initially unclear whether the Center for AI Standards and Innovation (formerly the U.S. AI Safety Institute) would survive the incoming Trump administration. With the Action Plan doubling down on CAISI's central role in evaluating national security risks in frontier models, however, CAISI will hopefully be able to engage with industry, initiate research efforts, and attract talent with renewed confidence and credibility going forward.
It is also great to see recognition that evaluating frontier models offers insights beyond direct risks from the models themselves: it helps preview the future security landscape more generally.
Invest in Biosecurity
AI will unlock nearly limitless potential in biology: cures for new diseases, novel industrial use cases, and more. At the same time, it could create new pathways for malicious actors to synthesize harmful pathogens and other biomolecules. The solution to this problem is a multitiered approach designed to screen for malicious actors, along with new tools and infrastructure for more effective screening. As these tools, policies, and enforcement mechanisms mature, it will be essential to work with allies and partners to ensure international adoption.
I'm glad to see that AI-related biosecurity concerns are front-of-mind for the Trump administration. In addition to the efforts outlined here, the administration should also establish robust monitoring of the PRC's exploration of and investment in AI-enabled biotechnology. PRC strategy documents indicate a clear desire to pursue the AI-biotechnology nexus, which may include the development of next-generation bioweapons that pose significant threats to U.S. national and economic security. The United States needs to identify key indicators of Beijing's progress in this area and build red teams to interrogate how the PRC might weaponize such capabilities.
Recommended Policy Actions
- Require all institutions receiving Federal funding for scientific research to use nucleic acid synthesis tools and synthesis providers that have robust nucleic acid sequence screening and customer verification procedures. Create enforcement mechanisms for this requirement rather than relying on voluntary attestation.
- Led by OSTP, convene government and industry actors to develop a mechanism to facilitate data sharing between nucleic acid synthesis providers to screen for potentially fraudulent or malicious customers.
- Build, maintain, and update as necessary national security-related AI evaluations through collaboration between CAISI at DOC, national security agencies, and relevant research institutions.
In this edition of Noteworthy, researchers from across CNAS dissect the recently released AI Action Plan from the Trump administration.
Experts make in-line comments on the most notable statements on the administration's goal to maintain U.S.-led global technological dominance. Learn more about CNAS's work on artificial intelligence.
The text below has been excerpted for length. To organize interviews with CNAS experts contact [email protected].