April 16, 2026

CNAS Insights | American AI Exports Need a Sovereignty Solution

Earlier this month, the Department of Commerce opened applications for the American AI Exports Program, an ambitious effort to deploy American AI technology, tools, and infrastructure—the full “stack”—around the world. Under the program, U.S. companies can form consortia and compete for federal support to sell bundled AI packages to foreign partners. The program’s rationale is straightforward: Helping American firms build the world’s AI infrastructure will help America shape the global standards, markets, and governance for this era-defining technology. The program’s success, however, may hinge on its ability to address rising global demands for AI sovereignty.

Although AI sovereignty remains ill-defined, it reflects a country’s reasonable desire to develop, deploy, and govern AI technologies on its terms. The Trump administration has sought to shape this still-fluid debate, arguing that “real AI sovereignty means owning and using best in class technology.” In other words, sovereignty means partnership with the United States. This attempted reframe may be clever, but it won’t cut it. Overseas markets are less interested in accepting Washington’s conception of AI sovereignty than in defining their own.

These definitions vary widely and remain high-level. When Honduras launched its sovereign AI platform last November, it invoked “national ownership” and “data sovereignty” as foundational to its “technological independence.” The U.K.’s Minister for AI has defined it in terms of “the ability of the state to have strategic leverage” to ensure access to the technology and its benefits. This February, delegates from over 100 countries adopted a declaration to pursue AI sovereignty, defined as “complete independence” from foreign tech platforms. Washington may be tempted to dismiss the rise of AI sovereignty as rhetorical posturing, but that would be a mistake. Countries may soon land on firmer definitions, and the AI Exports Program needs a minimally acceptable offering for sovereignty-sensitive markets, lest they turn to more accommodating alternatives.

That offering should rest on four commitments: security, resilience, openness, and partnership.

Security

Securing data is a key driver of the global push for sovereign AI. Countries are caught between Beijing’s cyber aggression on the one hand, and fears of Washington’s extraterritorial access and supposed “kill switches” in U.S. technology on the other. Even if these fears are overblown, many governments now seek greater control over sensitive data. Rhetorical assurances from Washington alone won’t suffice; the AI Exports Program should address these fears by supporting technical, privacy-enhancing solutions.

U.S. cloud companies already offer mature tools to accomplish this goal. One method is confidential computing, which allows governments to train, fine-tune, and query AI models on sensitive datasets without exposing the underlying data, even to the cloud provider itself. As of April 2026, at least two U.S. cloud providers already offer confidential computing services in data centers across Brazil, India, South Africa, and other countries. This structure preserves meaningful control for partner governments, while allowing U.S. firms to provide AI compute, services, and orchestration.

However, confidential computing comes with a premium that can be cost-prohibitive for some markets. Financing through the AI Exports Program could partially offset the difference between standard and confidential cloud deployments for workloads in sensitive domains such as health, finance, and defense.

At the same time, not every workload requires confidential computing, and the program should reward proposals that specify which workloads truly require privacy-enhancing approaches. Such targeted investments cost little compared to the price of losing key markets to more flexible competitors.

Resilience

The irony of past efforts to protect sensitive national data is that they often do just the opposite. Data localization initiatives, which restrict cross-border redundancy or require local data hosting, can leave data vulnerable in a crisis. Distributed cloud services, which are typically offered by U.S. hyperscalers, can absorb shocks that a single-jurisdiction system cannot.

The U.S. government should be clear with partner countries that, while confidential computing and in-country data centers are appropriate for the most sensitive workloads, overdoing it creates fragility. In March 2021, a fire at the French company OVH’s data center in Strasbourg destroyed infrastructure with no off-site backup, along with the data for thousands of customers. Iran’s recent strikes against the United Arab Emirates (UAE) took down two of three Amazon Web Services (AWS) regional data centers, causing severe outages across banking, payments, and enterprise services. While AWS offers cross-border redundancy as a default, the UAE’s data residency rules limited such resiliency-enhancing capabilities. As AI infrastructure increasingly becomes a strategic asset, it becomes more tempting both for national governments to localize and for foreign adversaries to target. As the UAE case underscores, the result is dramatically lower resilience.

The United States should not dismiss localization concerns, but reframe the tradeoff: Sovereignty is not control over physical location, but continuity under crisis. U.S. companies can offer resilient AI services that domestic alternatives typically cannot.

Openness

Countries increasingly see open-weight models as the most practical route to avoiding lock-in with any single AI vendor or system. Unlike closed proprietary models, open-weight offerings make their underlying model weights transparent and are free to download and tailor. Many foreign markets now favor open-weight models because they can be self-hosted, locally adapted, and cheaper to deploy.

Here, the United States faces a genuine dilemma. Open models can boost technological influence by diffusing associated standards and ecosystems, but they weaken direct commercial control and safeguards against risk. The need for such safeguards grows alongside AI capabilities, especially in the cyber domain. Closed models, by contrast, preserve a firm’s revenue and control but risk losing to open-weight Chinese alternatives that are cheaper and more flexible.

Encouraging U.S. companies to offer competitive open-weight models may seem counterintuitive, but open models can help address countries’ sovereignty concerns by allowing them to adapt, audit, and build on U.S. technology without requiring ongoing access to American servers. The United States should continue to offer frontier capabilities through closed models to trusted overseas markets, while maintaining open-weight offerings for markets that demand them, as many U.S. firms already do.

The AI Exports Program should present U.S. open-weight models as a sovereignty-compatible option in bilateral conversations with partner governments and include them on its menu of full-stack packages. These models offer an attractive alternative for markets willing to trade frontier performance for greater local control and customization. To ensure this approach aligns with U.S. national security interests, the program should only highlight models that have undergone rigorous evaluations to prevent uplift of dangerous capabilities.

Partnership

Countries are not looking to become passive consumers of American AI, or worse, vassals of America’s AI “dominance.” They want to be active participants in the AI transition. The AI Exports Program should take those aspirations seriously, or risk losing to competitors more willing to work with local partners. Describing U.S. AI ambitions in terms of dominance, winning, and lock-in, as the administration has, only validates fears of technological dependence.

The program should treat local participation not as a concession but as a comparative advantage. In many target markets, domestic firms hold the government relationships, regulatory knowledge, and on-the-ground talent that could take U.S. companies years to build independently. Early examples suggest this model works: In Honduras, California-based AI firm MeetKai channeled its sovereign AI infrastructure deployment through Hondutel, the state-owned national telecom, gaining the regulatory standing and government access that no American company could have assembled on its own.

Consortium proposals should replicate MeetKai’s model and identify where domestic firms can serve as service-layer and deployment partners within a U.S.-led stack. A foreign government that sees its own industry represented in an American AI deployment is more likely to welcome it.

The AI Exports Program should formalize this insight. Consortium proposals that incorporate credible local partners should be evaluated more favorably for financial support. A sovereignty strategy that leaves no room for local participation will not be credible to the governments it needs to convince.

Embracing these four commitments—security, resilience, openness, and partnership—will help signal that the program has thought seriously about what foreign governments actually need from AI sovereignty, rather than what Washington simply wants them to accept.

Ruby Scanlon is a research associate with the Technology and National Security Program at the Center for a New American Security.

Vivek Chilukuri is the director and a senior fellow of the Technology and National Security Program at the Center for a New American Security.
