May 05, 2026
Who Will Make Money on AI?
A Discussion Paper on Aligning Commercial Incentives with National Security Interests
Executive Summary
The private sector is playing a leading role in advancing the frontier of artificial intelligence (AI). As a result, commercial incentives are likely to have a significant impact on how AI capabilities develop and diffuse across markets. Firms’ commercial incentives will influence U.S. national security interests associated with the emergence of powerful AI systems. These interests include enabling beneficial uses of AI while limiting security risks associated with AI misuse, ensuring reliable and controllable AI system behavior in deployment, and maintaining strategic geopolitical advantage in the development and global diffusion of AI.
Yet to date, stakeholders focused on AI national security interests have paid only limited attention to AI companies’ commercialization strategies and market dynamics across the AI stack. This paper seeks to bridge this gap, identifying potential scenarios for the future shape of AI markets and exploring the implications of these scenarios for U.S. national security. Rather than attempting to resolve core debates on the commercialization of AI, the paper seeks to prompt consideration in both the private and public sectors, and among expert communities in economics and national security, of how commercial incentives can better align with U.S. national security interests.
Across the AI technology stack (including the infrastructure layer of chips and data centers, the foundation model layer, and the application layer), firms are exploring how to achieve profitability. Variations in companies’ commercialization strategies may generate more or less demand for AI safety and security from product end users, affecting whether these strategies reinforce or undermine U.S. national security interests. At the sector level, the concentration of market power across layers of the AI stack has implications for the government’s ability to effectively guide the private sector–led development of powerful AI. Mapping out how such commercial dynamics may unfold will help prepare policymakers and other stakeholders to identify possible market interventions to better align commercial incentives with U.S. national security interests.
Key Takeaways
The following key takeaways emerged from the research:
Commercialization Strategies of AI Companies
- AI model and application companies are increasingly focused on generating revenues, targeting four potential end markets: enterprise use cases, consumer use cases, government use cases, and internal use cases that leverage AI capabilities to earn revenues indirectly. The relative size and appeal of these four markets will shape the trajectory of the AI ecosystem.
- Different end markets have varying levels of demand for AI safety and security based on customer incentives. Government and enterprise markets will tend to be more risk averse, which may better align with U.S. national security interests. Consumer end markets may feature significant information asymmetries and risk externalities. Internal use cases may hinder transparency and create misuse and misalignment risks that are difficult for governments to manage.
- Compute policy—both restricting trade in advanced semiconductors through export control programs and supporting such trade through export promotion programs—will continue to play an important role in advancing U.S. national security interests on AI. U.S. strategy will need to balance the objectives of maintaining a preponderance of advanced compute in the United States with encouraging foreign counterparts to build and deploy their AI systems on top of U.S. infrastructure.
Market Structure Across the AI Sector
- The AI market has meaningful concentration at the infrastructure and model layers of the AI stack, which may be exacerbated by vertical integration and circular financing deals among key players.
- There is significant uncertainty on how the foundation models market will evolve, including how market demand may be segmented between open-weight and closed-weight models and the extent to which “good enough” commoditized AI will capture a substantial share of the total market.
- Shifts toward a monopolistic market structure at any layer of the AI stack may erode the government’s power to protect U.S. national security interests if any one AI provider becomes too big to fail. At the same time, a more decentralized market may present different challenges associated with difficulties in controlling access to powerful AI and coordinating on best practices for AI safety.
Recommendations
Policymakers face a formidable challenge in trying to shape AI markets to align commercial incentives and national security interests. Given the rapid pace of technological development, rigid rules are unlikely to be effective. Instead, a flexible and layered policy framework that prioritizes shaping market incentives so that economic actors internalize the costs and benefits of AI safety and security may be a more durable approach. Such a framework should:
1. Leverage U.S. infrastructure and compute capabilities to shape the geographic distribution of AI. U.S. leadership in AI infrastructure and compute provides the United States with substantial ability to shape the geographic distribution of AI capabilities and adoption, as well as an opportunity to make access to U.S. technologies contingent upon alignment with U.S. national security interests. U.S. policymakers should continue to facilitate the build-out of AI infrastructure in the United States, use export controls and export promotion policies to limit China’s ability to develop and deploy frontier AI while encouraging other foreign counterparts to buy U.S. AI products where feasible, and embed basic AI safety and security conditions into AI infrastructure approvals.
2. Use transparency, liability, insurance, and procurement policies to align market demand for AI safety and security with commercial incentives. Policymakers should prioritize strengthening the transparency of AI companies, which will better enable investor and customer demand for AI safety to influence AI companies’ behaviors. Policymakers should also mandate disclosures to the government for certain high-severity misuse and misalignment risks, particularly for companies’ internal use of models and applications, which may not otherwise garner public scrutiny. Clarifying legal liability for AI misuse and misalignment incidents up front through legislation, rather than litigating through the courts, could also help ensure that the costs of misuse and misalignment do not become national security externalities in AI markets. Supporting the emergence of AI insurance markets can also incentivize more responsible behavior. Finally, governments should leverage their role as customers of AI products to send strong demand signals for safe and secure AI.
3. Encourage safe and competitive AI markets in the United States. Policymakers should provide antitrust guidance reflecting the economic and national security importance of AI markets, defining acceptable levels of concentration across the AI stack, and clarifying under what conditions AI companies can coordinate and share information to advance their shared AI safety priorities. The government should also assess national security risks associated with the use of Chinese models in the U.S. AI market and adopt risk-based approaches to limit the access of Chinese firms where justified. Recognizing considerable demand for open-weight models and the downsides of ceding this market to Chinese competitors, policymakers should work with researchers and the private sector to foster a healthy ecosystem of safe and secure U.S. open-weight models.
Introduction
The development of breakthrough dual-use technologies has historically involved cooperation and interaction between academia, the government, and the private sector. Researchers in academia often provide the initial seeds for new innovations, governments can marshal resources and direct research programs toward national security priorities, and as technologies mature and develop, the private sector can scale new capabilities, operating within regulatory frameworks that account for dual-use risks identified by the government. For example, the initial ideas for nuclear energy emerged from European academics. The U.S. government then launched the massive Manhattan Project to develop nuclear weapons capabilities in the context of the Second World War. A decade later, the Atomic Energy Act of 1954 paved the way for the private sector to deploy civilian nuclear power plants.1
The development of artificial intelligence (AI), however, has followed a distinct trajectory. The private sector is playing a leading role in advancing the frontier of this new dual-use technology even in its early stages, as novel capabilities and associated risks are still being discovered. Because AI is developing within commercial firms rather than government programs, market forces are likely to play a significant role in shaping how these capabilities develop (as companies direct research resources to align with commercial incentives) and diffuse (as companies choose which markets to prioritize). There is an urgent need to better understand how market structures and commercial pressures will influence the evolution of AI technologies, and what these dynamics mean for AI risks and U.S. national security interests.
Today, there are several wide-ranging debates on future scenarios for AI commercialization and market development. The unique characteristics of AI as a product, including significant uncertainty about which product-market fits will ultimately prove most important for a powerful general-purpose technology, make it difficult to draw clear historical lessons from other industries. Will companies developing frontier AI models identify sustainable business models? Will enterprise AI users find productive use cases to justify rapid uptake of AI tools? Which layers of the AI stack will be most profitable, and which may become commoditized? Will today’s dominant incumbents across the AI stack maintain their positions, or will new upstarts encroach on their markets?
The starting point of this paper is that the answers to these and related questions will have important implications for U.S. national security interests. To date, however, experts focused on AI and national security have tended to pay more attention to technical AI developments, such as the trajectories and timelines to reach different AI capability benchmarks, than to companies’ commercialization strategies and market incentives. To the extent that commercial dynamics are considered at all, the focus has primarily been on how competition to release new models may create incentives to cut corners on safety.2 While important, this is only one channel through which AI commercialization may affect national security, leaving an important gap in understanding. This paper does not seek to provide definitive answers to the big outstanding questions on the evolution of the market for AI, but rather to map the contours of these debates, identify what is at stake for U.S. national security, and assess how policymakers and private actors might better align commercial incentives with national security interests.
The AI Economy
AI has evolved from an esoteric research topic to an emergent general-purpose technology that could have transformative effects on the U.S. and global economies. Researchers are beginning to identify the impact of AI in the latest aggregate economic data on productivity and employment in the United States, although such analyses remain contested.3 Expectations of the future economic impact of powerful AI vary widely. Some leaders at frontier labs have suggested that AI will drive an economic revolution in the near term, with growth in gross domestic product accelerating to 10–20 percent a year.4 Economists studying AI tend to be more cautious, highlighting that even as AI capabilities improve exponentially, economic growth will still be limited by physical constraints in the real world—including the ability to scale chips and data centers—and that historical precedents suggest technology diffusion and adoption are slow and staggered processes.5 Some economists and AI experts warn that powerful AI could spark mass unemployment, as AI becomes a cheaper and more capable substitute for nearly all knowledge workers.6 Other economists, however, are more sanguine, again often pointing to the history of earlier labor-saving innovations, which served as complements rather than substitutes for labor and induced greater aggregate demand.7
Meanwhile, the American public is increasingly anxious about AI. One recent poll found that 57 percent of Americans believed the risks of AI outweigh its benefits, with only 34 percent holding the opposite opinion.12 Seventy percent of Americans think advancements in AI will lead to a decrease in job opportunities, while only 7 percent expect an increase.13 Such perceptions are sparking a backlash against the prospect of transformative AI, epitomized in the fights state and local governments are launching against new data center construction.14 Whether these public pressures ultimately translate into policy shifts to restrict AI development and deployment, however, remains to be seen; Americans have also had predominantly negative views of social media companies for several years, but this has not resulted in substantial regulatory changes to date.15
Against this backdrop, companies across the AI stack are seeking to establish or cement profitable business strategies in rapidly evolving commercial AI markets. Individual companies’ commercial decisions are shaped by broader macroeconomic trends in the AI market, and likewise the aggregate economic impact of AI will be determined by the interaction of individual companies’ decisions. Certain segments of the AI tech stack, such as semiconductor companies, have more clarity on commercialization and have already reaped substantial revenues from the AI boom. However, fundamental questions remain about the path to commercialization and profitability for AI models and applications. A central question is to what extent AI companies are incentivized to design their products for enterprise, consumer, government, or internal uses (e.g., indirectly monetizing AI capabilities through better ad targeting). How companies balance these market segments, and which segments grow faster than others, will drive commercial strategy decisions that shape incentives for investing in AI safety, with potential knock-on effects for U.S. national security interests.
Companies’ commercialization strategies will also impact market structure and concentration across the key layers of the AI stack: AI infrastructure, including chips and data centers; foundation models; and AI applications. Throughout the AI stack, companies are seeking to establish market positions that provide a moat to shield them from competition while also worrying about overconcentration among both their suppliers and their customers. These evolving market dynamics will influence what levers governments might use to shape AI development and deployment to ensure its consistency with U.S. national security interests.
Powerful AI capabilities may bring significant benefits for U.S. national security interests, but they may also have serious repercussions. This paper considers three categories of national security interests: (1) how to enable beneficial AI uses while minimizing risks associated with the misuse of powerful AI; (2) how to ensure reliable and controllable AI system behavior, aligned with the intentions and values of developers and users; and (3) how to maintain the United States’ strategic geopolitical advantage in the development and global diffusion of AI.
Realizing the benefits of powerful AI will require broad adoption and diffusion of AI capabilities. But expanding the use of AI may also increase security risks associated with the misuse of AI, namely the ability of malicious actors to deliberately employ powerful AI for harmful purposes. As AI systems become more capable, they can lower the barriers to conducting cyberattacks, developing biological or chemical weapons, generating disinformation at scale, or supporting adversary military operations. The April 2026 announcement by Anthropic that its Mythos model had discovered a dangerous number of zero-day cybersecurity exploits, and its decision to withhold a public release of the model, underscore that these risks are no longer hypothetical.16 AI developers build safeguards into their models to prevent misuse, but these guardrails are imperfect, susceptible to circumvention through jailbreaking, and require continual investment to maintain as model capabilities advance. They are also discretionary, left to the decisions of individual firms. Moreover, the growing availability of open-weight models, whose parameters are publicly released, means that safety guardrails can be removed by anyone with sufficient technical capability.
The United States also has a national security interest in ensuring AI systems behave in ways intended by their developers or users and avoiding AI misalignment. At one end of the spectrum, misalignment includes relatively mundane technical failures, such as an AI system that misinterprets instructions, produces unreliable outputs, or acts unpredictably in novel situations. At the other end, it encompasses more severe scenarios in which highly autonomous AI systems pursue objectives that diverge from human intentions in ways that are difficult to detect or correct. At an extreme, misalignment of powerful AI systems may create large-scale risks to human safety. AI developers use approaches such as reinforcement learning from human feedback and related techniques to train models to better align with what users want, but “interpretability”—understanding why a model acts the way it does—remains a significant challenge, particularly as systems grow more capable and are deployed with greater autonomy.17
Finally, the United States also has a set of geopolitical national security interests associated with how AI development and diffusion may shift the balance of power between states. The United States has a strategic interest in maintaining a leading position in AI relative to competitors, particularly China, given AI’s potential military and geoeconomic implications. AI systems are already being deployed in U.S. military operations, and a dominant AI capability could substantially alter the balance of power between the United States and its adversaries. Beyond direct military applications, if U.S. competitors gain a large share of the global AI market, this could create systemic dependencies and provide them with a powerful tool for economic leverage over other countries.
Importantly, these three interests are not independent and may at times be in tension. For example, the geopolitical interest in maintaining more advanced AI capabilities than strategic competitors may lead to dangerous AI race dynamics, where concerns about misuse and misalignment are de-prioritized to develop powerful AI capabilities as quickly as possible. The reverse is also true: Pausing U.S. AI progress until misuse and misalignment risks are fully resolved could compromise the country’s competitive position relative to adversaries who do not apply the same constraints. The overarching national security interest for the United States is advancing each of these three objectives in tandem, without sacrificing any one for the sake of another.
About This Paper
This paper does not seek to resolve any particular debate about how AI markets will evolve, nor to make direct causal arguments that certain market structures will be better or worse for U.S. national security interests. In fact, in most scenarios, there will likely be a mix of competing incentives as firms experiment with different ways to achieve profitability. By charting a range of plausible commercial scenarios and then mapping out how these scenarios may implicate U.S. national security interests, the paper aims to inform decision-makers in the private and public sectors as they seek to align commercial incentives with national security interests. A richer understanding of these market dynamics will help U.S. policymakers and private sector actors calibrate their interventions appropriately and avoid unintended consequences.
The analysis and recommendations in this paper are informed by extensive desk research by the authors, as well as dozens of interviews conducted with industry, civil society, and government representatives in the San Francisco Bay Area, Washington, D.C., and the United Arab Emirates. Interview subjects included companies making AI chips, building AI data centers, and developing foundation models and AI applications; enterprises integrating AI into their business operations; venture capital firms and other firms investing in AI; AI safety experts; academic economists; and government officials regulating AI and developing strategies to integrate AI into their economies. Interviews were conducted confidentially to allow participants to share candid views.
The next section examines potential variations in AI companies’ commercialization strategies and assesses to what extent these commercial incentives align with U.S. national security interests. The subsequent section zooms out from the company level to the sector level, assessing how market structure and concentration may evolve across layers of the AI stack and the national security implications of these trends. The final section outlines a high-level framework for strengthening policy interventions to better align commercial incentives with national security interests.
Endnotes
1. David Fischer, History of the International Atomic Energy Agency: The First Forty Years (International Atomic Energy Agency, 1997), https://www-pub.iaea.org/MTCD/Publications/PDF/Pub1032_web.pdf; U.S. Department of Energy, The History of Nuclear Energy (U.S. Department of Energy, accessed April 9, 2026), https://www.energy.gov/ne/articles/history-nuclear-energy.
2. See, for instance, Yoshua Bengio et al., International AI Safety Report 2026 (AI Security Institute, 2026), 97, https://internationalaisafetyreport.org/publication/international-ai-safety-report-2026, which notes that “Due to competitive pressures, AI companies may face trade-offs between faster product releases and investments in risk reduction efforts.”
3. For an overview of the debate on productivity, see Alex Imas, “What Is the Impact of AI on Productivity?,” Ghosts of Electricity (Substack), January 29, 2026, https://aleximas.substack.com/p/what-is-the-impact-of-ai-on-productivity; and Erik Brynjolfsson, “The AI Productivity Take-off Is Finally Visible,” Financial Times, February 15, 2026, https://www.ft.com/content/4b51d0b4-bbfe-4f05-b50a-1d485d419dc5. For an overview of the debate on labor market impacts, see Jed Kolko, “Research on AI and the Labor Market Is Still in the First Inning,” Peterson Institute for International Economics Realtime Economics (blog), March 20, 2026, https://www.piie.com/blogs/realtime-economics/2026/research-ai-and-labor-market-still-first-inning.
4. Dario Amodei, “Machines of Loving Grace: How AI Could Transform the World for the Better,” Dario Amodei (blog), October 2024, https://darioamodei.com/essay/machines-of-loving-grace.
5. Ezra Karger et al., “Forecasting the Economic Effects of AI,” working paper (Forecasting Research Institute, March 2026), https://forecastingresearch.org/economic-effects-of-ai; Thomas Cunningham, “Forecasts of AI & Economic Growth” (blog), November 9, 2025, https://tecunningham.github.io/posts/2025-10-19-forecasts-of-AI-growth.html; and Charles I. Jones, “A.I. and Our Economic Future,” Working Paper No. 34779 (NBER, January 2026), https://www.nber.org/papers/w34779.
6. Anton Korinek and Donghyun Suh, “Scenarios for the Transition to AGI,” Working Paper No. 32255 (NBER, March 2024), https://www.nber.org/papers/w32255; Matthew Barnett, “AGI Could Drive Wages Below Subsistence Level,” Epoch AI Gradient Updates (blog), January 24, 2025, https://epoch.ai/gradient-updates/agi-could-drive-wages-below-subsistence-level.
7. Erik Brynjolfsson, Danielle Li, and Lindsey Raymond, “Generative AI at Work,” The Quarterly Journal of Economics 140, no. 2 (2025): 889–942, https://doi.org/10.1093/qje/qjae044; Martin Neil Baily, Erik Brynjolfsson, and Anton Korinek, “Machines of Mind: The Case for an AI-Powered Productivity Boom,” Brookings Institution, May 10, 2023, https://www.brookings.edu/articles/machines-of-mind-the-case-for-an-ai-powered-productivity-boom/.
8. Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014).
9. Holden Karnofsky, “Some Background on Our Views Regarding Advanced Artificial Intelligence,” Coefficient Giving, May 6, 2016, https://coefficientgiving.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/.
10. Arvind Narayanan and Sayash Kapoor, “AGI Is Not a Milestone,” AI as Normal Technology, May 1, 2025, https://www.normaltech.ai/p/agi-is-not-a-milestone.
11. Multiple interviews with employees at AI labs, January 2026. Interviews were conducted in confidentiality, and the names of the interviewees are withheld by mutual agreement. It is worth noting, however, that there is a range of views within the labs and no single consensus on timelines. See also Amodei, “Machines of Loving Grace”; Sam Altman, “The Gentle Singularity,” Sam Altman (blog), June 10, 2025, https://blog.samaltman.com/the-gentle-singularity.
12. Allan Smith, “Poll: Majority of Voters Say Risks of AI Outweigh Benefits,” NBC News, March 10, 2026, https://www.nbcnews.com/politics/politics-news/poll-majority-voters-say-risks-ai-outweigh-benefits-rcna262196.
13. Quinnipiac University Poll, “The Age of Artificial Intelligence: Americans’ AI Use Increases While Views on It Sour, Quinnipiac University Poll on AI Finds; 7 in 10 Think AI Will Cut Jobs with Gen Z the Most Pessimistic,” press release, March 30, 2026, https://poll.qu.edu/poll-release?releaseid=3955.
14. Lydia DePillis, “Local Opposition Is Slowing A.I. Data Centers: Wall Street Has Noticed,” The New York Times, March 26, 2026, https://www.nytimes.com/2026/03/26/business/economy/ai-data-centers-construction-local-opposition.html.
15. Monica Anderson, Americans’ Views of Technology Companies (Pew Research Center, April 29, 2024), https://www.pewresearch.org/internet/2024/04/29/americans-views-of-technology-companies-2/.
16. Huo Jingnan, “How AI Is Getting Better at Finding Security Holes,” National Public Radio, April 11, 2026, https://www.npr.org/2026/04/11/nx-s1-5778508/anthropic-project-glasswing-ai-cybersecurity-mythos-preview.
17. Tim G. J. Rudner and Helen Toner, Key Concepts in AI Safety: Interpretability in Machine Learning (Center for Security and Emerging Technology, March 2021), https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-interpretability-in-machine-learning/.