May 07, 2026

American AI Companies Can’t Get Enough Chips

Implications for U.S. Policy

Executive Summary

In 2026, artificial intelligence (AI) chip production has become a binding constraint on the pace of the AI compute buildout. Demand for computing power to train and deploy advanced AI models continues to grow exponentially, outpacing many chip manufacturers’ forecasts. Supply chains for AI chips and key inputs cannot scale rapidly enough to meet demand, as it takes years to build additional manufacturing capacity. Given these constraints on AI chip supply, the United States has both greater leverage and greater reason to ensure every chip is put to its highest-value use, giving rise to five key policy implications:

  • As rising chip costs risk pricing out researchers and scientists, Congress should significantly increase funding for the National AI Research Resource to ensure compute access for public innovation keeps pace.
  • Every chip exported to competing countries such as China is one fewer available to U.S. companies and democratic allies. Exports to China expand the compute accessible to the Chinese Communist Party (CCP) that can be used against American interests.
  • Aggressively exporting AI chips to allies and democratic partners remains critical to enable U.S. AI companies to procure sufficient data center capacity and to ensure long-term American AI leadership.
  • Chip shortages add urgency to countering chip smuggling. Location verification and stronger controls on high-bandwidth memory can help ensure scarce supplies are not diverted to unauthorized end users.
  • New initiatives like Pax Silica have a vital role to play in coordinating efforts between allies, helping build out resilient semiconductor supply chains.

Introduction

Artificial intelligence (AI) chips are critical for AI progress, and American AI companies can’t get enough of them. They provide the computing power—that is, the “compute”—needed to train, deploy, and improve AI models. The more computing power AI companies amass, the better AI models they can produce. Since the release of ChatGPT in late 2022, spending on AI chips and data centers has grown exponentially. Microsoft, Alphabet, Amazon, Meta, and Oracle plan to spend almost $700 billion on capital expenditures in 2026, the majority for AI infrastructure. As of publication, AI companies have consistently said that they want to expand their compute faster than supply chains allow (see the appendix). As a result, AI chip manufacturing is becoming a binding constraint on the pace of the AI compute buildout.

This was not always the case. Building compute requires several inputs, including the chips themselves, data centers to house them, and power for the data centers. In 2024 and 2025, the most common constraint on AI scaling was power for data centers. However, in 2026, the tightest constraint that AI companies face in procuring additional compute is shifting to the production of the AI chips themselves. As Sam Altman, chief executive officer (CEO) of OpenAI, put it: “It [the bottleneck] goes back and forth. Right now, again, it’s chips.” Chip supply has not kept pace with exponentially growing demand. Since building brand-new chip manufacturing capacity takes years, the bottleneck in chip production will likely be the rate-limiting factor on the AI compute buildout for at least the next year.

AI chip manufacturing is becoming a binding constraint on the pace of the AI compute buildout.

This bottleneck has several implications for U.S. AI policy, with scarcity increasingly making AI chips a strategic resource. Every chip sent to competitors such as China is one less chip available to American AI companies, raising prices and slowing America’s AI progress. This is true even for older AI chips, like NVIDIA’s H200, which use the same limited manufacturing capacity needed for more advanced AI chips. It also means one less chip available for the American AI Exports Program, which depends on sufficient supply to cement the U.S. tech stack in strategic third markets. Finally, it underscores the importance of frameworks like Pax Silica to not only tackle the upstream challenges of critical minerals but also build a more resilient, long-term allied supply of AI chips.

American AI Companies Can’t Get Enough Chips

The compute used to build and run AI systems has been growing exponentially, driven by continued growth in compute used for training and enhancing new models, conducting research and development (R&D) to improve the next generation, and deploying models at scale. Although AI models are becoming more compute efficient, this has not reduced aggregate demand for compute. Surging demand from AI scaling and adoption has overwhelmed efficiency gains.

Compute constraints force AI companies to make difficult tradeoffs between tightening usage limits on customers, raising prices to suppress demand, cutting R&D investment, training smaller models, and serving a lower-quality product. Each option means less revenue, slower capability progress, and a weakened competitive position. These tradeoffs are already playing out. Recently, Anthropic introduced stricter rate limits on Claude during peak hours to manage demand. Google CEO Sundar Pichai claimed Google is “supply constrained even as we’ve been ramping up our capacity.” The leaders of major AI companies and their suppliers have consistently echoed this assessment (see the appendix).

These compute constraints are evident in AI chip rental prices. Historically, the cost of computing power has dropped exponentially over time, as more efficient hardware enters the market. However, analyses by Silicon Data and SemiAnalysis indicate that the rental price of the H100, a 2023 NVIDIA chip, is higher today than it was several years ago. In other words, chip demand has more than offset efficiency-related price reductions.

Source: Center for a New American Security (CNAS), using data from Silicon Data and SemiAnalysis

The tightest constraint for a given company will depend on its existing contracts and assets. However, as companies plan ahead to scale up compute, their leaders consistently point to a specific bottleneck limiting the pace of the compute buildout: manufacturing capacity for AI chips. According to an executive at Broadcom, which designs Google’s AI chips, “We are seeing that TSMC is hitting [production-capacity] limits. . . . They will be increasing the capacity to 2027, but that has become a bottleneck.” Taiwan Semiconductor Manufacturing Company’s (TSMC’s) CEO, C. C. Wei, echoed this claim: “The bottleneck is TSMC’s wafer supply, not the power consumption.”

What Drives Chip Shortages

Chip manufacturers are reluctant to aggressively expand production for several reasons. First, long lead times, high capital costs, and boom-and-bust cycles are intrinsic to the business. Second, the memory of getting burned by overbuilding in response to inflated demand during the COVID-19 pandemic is still fresh. Automotive and consumer electronics manufacturers canceled orders early in the pandemic, anticipating a downturn. When the economy rebounded, they could not secure adequate supply, as it had already been reallocated. The result was cars worth tens of thousands of dollars sitting unfinished on factory lots for want of chips costing only a few dollars, at an estimated cost to the automotive industry of $210 billion in 2021. When manufacturers expanded capacity in response, they overshot actual demand: customers had been double-booking orders to ensure they got enough supply, leaving manufacturers with excess capacity and significant losses.

The computer memory industry is especially susceptible to boom-and-bust cycles. Demand for memory can shift quickly, but new manufacturing capacity takes billions of dollars and many years to bring online. This timing mismatch can produce dramatic swings: High demand drives up prices and incentivizes investment, but by the time new capacity comes online, the industry has often overbuilt. These cycles have bankrupted company after company, consolidating the market from over 20 significant producers in the 1990s to just three main companies today: Samsung, SK Hynix, and Micron. These companies have seen demand spikes before and are wary of repeating mistakes that killed their competitors. These concerns make them hesitant to invest aggressively in response to surging demand from AI companies.

The computer memory industry is especially susceptible to boom-and-bust cycles.

TSMC, the Taiwan-based manufacturer that fabricates roughly 90 percent of the world’s most advanced chips, faces similar market dynamics. The company’s CEO has acknowledged this directly: “You essentially try to ask whether the AI demand is real or not. I’m also very nervous about it. . . . If we did not do it carefully, that will be a big disaster to TSMC for sure. . . . I want to make sure that my customers’ demands are real.” To hedge against the risk of evaporating AI demand, TSMC is also reserving capacity for customers with a long track record of reliable demand, such as Apple, even if that means accepting lower prices.

This caution has directly contributed to the current AI chip bottleneck. Despite the surge in AI chip spending after the release of ChatGPT, capital expenditures from TSMC and major memory manufacturers were lower in 2023 and 2024 than in 2022 (Figure 2). While they are investing more aggressively now, it takes several years to build new manufacturing capacity. Concentration in the semiconductor industry means there are no viable alternatives when demand spikes. AI chip production is therefore effectively capped by how much a few cautious companies invested over the past few years.

Source: CNAS, using data from public financial filings, Epoch AI, and forecasts for 2026 CapEx

AI-driven demand, meanwhile, has continued to skyrocket. As of April 2026, Anthropic’s annualized revenue has surged to $30 billion, up from $9 billion just four months earlier. This has caught chip manufacturing companies flat-footed. Reportedly, NVIDIA and Broadcom requested additional manufacturing capacity from TSMC, only to be turned down. Google has reportedly been unable to increase its AI chip production to meet its 2026 targets because it did not secure enough manufacturing capacity. These chip manufacturers are now investing more aggressively in additional capacity, but given the time required to bring new capacity online, the availability of AI chips will likely continue to be the tightest constraint on AI scaling in the near term.

TSMC’s first Arizona fabrication facility (fab) is now producing four-nanometer (nm) chips, with five more fabs planned through a $165 billion investment. As the demand signals from AI companies work their way through the supply chain, other companies are making bets to increase chip supply, including xAI CEO Elon Musk’s ambitious goal of building his own fabs. Market forces will eventually boost supply to meet the demand, but that will take years to bear fruit. Tightness in the chip supply chain will remain through at least the end of 2026.

Specific Bottlenecks Within AI Chip Production

AI chips consist of several distinct components that share many manufacturing processes with each other and with consumer hardware, such as smartphones. Key components include logic dies (which perform the computations) and memory, which is packaged with the dies on the same AI chip.

Figure 3 | Memory and Logic Wafer Production for AI Chips Is Highly Concentrated

Memory and logic wafer production for AI chips depends on the fabrication facilities (fabs) of a small number of companies.

Source: CNAS

The production of logic dies and memory is particularly tight. Both rely on semiconductor fabs with highly sophisticated “clean rooms” to manufacture chips with ultraprecise lithography machines. These environments must be kept more pristine than a hospital operating room, as a single speck of dust can destroy chips during production. Building new clean rooms takes years, which is a main reason that production of logic and memory cannot respond quickly to surges in demand.

Logic Wafers

Finished logic wafers—the processed silicon discs from which individual logic dies are cut—are currently one of the tightest constraints on AI chip supply. Logic dies provide the core processing component of an AI chip, but logic wafers are also used to make networking chips, central processing units (CPUs), and parts of memory chips.

Silicon wafers are used to manufacture logic and memory chips.

PonyWang via Getty Images

NVIDIA, AMD, and other AI chip designers rely on TSMC’s world-leading, advanced node processes to fabricate their logic chips. But TSMC’s manufacturing capacity is finite in the short term and increasingly oversubscribed, given the surging demand for AI compute. As a result, even TSMC’s largest customers are short of supply.

In November 2025, TSMC’s CEO stated that the company’s advanced process capacity was “not enough, not enough, still not enough” and that demand was running roughly three times ahead of what the company could produce. He also reportedly joked about wearing a shirt that read “no more wafers” to emphasize the severity of the shortage. TSMC’s production capacity for 3 nm chips stood at around 70 percent utilization in early 2025, but it has been near and even above 100 percent since late 2025, as the company pushes machines above their planned capacity, including by delaying maintenance. The company’s 3 nm node is especially constrained because it produces today’s most advanced AI chips, including NVIDIA’s Vera Rubin and Google’s TPUv7. The company’s 2 nm fabrication capacity is booked through 2028. TSMC cannot meaningfully ramp up advanced wafer manufacturing capacity in the near term because constructing additional fabs takes two to four years and sometimes longer. New capacity currently under construction is also unlikely to resolve the shortage: TSMC’s planned Arizona Fab 4 is already fully booked, even though ground has not yet been broken.

Source: CNAS adaptation from SemiAnalysis

Other industries, such as smartphones, have historically consumed significantly more advanced logic wafer capacity than AI chips, so AI companies could increase their chip production by simply outbidding those industries for production slots. However, with AI compute production more than tripling each year, it is beginning to run up against TSMC’s total capacity on certain production lines. Flagship customers such as Apple and Broadcom have explicitly stated that they are constrained by the available supply of TSMC chips, while TSMC’s CEO stated on the April 2026 earnings call that it will not be until 2027 that “supply can meet demand.”
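To see why that growth rate matters, consider a toy calculation. If AI demand more than triples each year while a production line’s capacity is fixed, any unused headroom disappears within a year or two. The roughly 3x annual growth rate comes from the text above; the starting utilization share is a purely hypothetical number for illustration:

```python
import math

# Toy model: exponentially growing AI wafer demand against fixed fab capacity.
# The ~3x-per-year growth rate is from the report; the initial share of a
# production line consumed by AI chips is an illustrative assumption.
GROWTH_PER_YEAR = 3.0

def years_of_headroom(initial_share: float) -> float:
    """Years until demand growing at GROWTH_PER_YEAR fills 100% of capacity."""
    return math.log(1.0 / initial_share) / math.log(GROWTH_PER_YEAR)

# If AI chips hypothetically start at 20 percent of a line's capacity:
print(f"{years_of_headroom(0.20):.2f} years of headroom")  # ~1.46 years
```

Even starting from a minority share of a production line, demand tripling annually exhausts the remaining capacity in well under two years, which is shorter than the time needed to build a new fab.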

Memory

Memory is another bottleneck. Smartphones and personal computers use dynamic random-access memory (DRAM) as their standard form of working memory. AI data centers use both this conventional DRAM and a specialized variant called high-bandwidth memory (HBM), which stacks multiple DRAM dies vertically to deliver the massive bandwidth AI chips require. When AI models are deployed, performance is shaped by how quickly the chip can move data through its memory, which is the key functionality HBM provides. As more computing power is devoted to running models at scale, HBM has grown increasingly important and scarce.

For a time, this wasn’t a problem. DRAM is a large market that predated the AI boom, so AI chipmakers could get more memory by simply buying a larger share of existing production. However, as with advanced logic wafers, that runway has narrowed as AI memory demand has skyrocketed. Aggregate HBM bandwidth has grown by more than four times per year, and AI now accounts for the majority of DRAM demand (Figure 5).

Source: Adapted from SemiAnalysis

Growing demand for HBM has stressed the production capacity of the entire memory industry, causing some DRAM prices to increase over 600 percent in 2025. In addition to DRAM (used for working memory), demand for storage in AI servers drove up the price of NAND flash (the memory used for long-term storage) over 300 percent in 2025. Because DRAM and NAND are also essential for consumer electronics like smartphones and laptops, these industries have been hit hard by the price shock. Industry analysts expect the memory shortages and price increases to shrink the global PC and smartphone markets by 11 and 13 percent, respectively, in 2026. In response to the tightness in DRAM supply, Meta recently said it would extend the planned lifetime of some of its older data centers, stating: “New server procurement cannot keep pace with demand growth in the near term.”

While certain buyers can get more memory by outbidding others, overall memory production is finite in the short term. The AI compute buildout cannot grow faster than memory fabrication capacity allows. For both memory and logic wafers, production capacity is also often locked into longer-term contracts, especially relative to the pace of the AI industry, meaning that additional capacity is unavailable at any price.

Several factors compound the current memory shortage. First, making HBM requires three to four times as many memory wafers per gigabyte of memory as standard DRAM does. When chipmakers expand HBM production, therefore, it comes at a steep cost to conventional DRAM supply. Second, the amount of HBM packed into each AI chip is rising exponentially with each generation.

Third, memory manufacturers cut investments in new fabrication facilities in 2023 and 2024, when they were losing money on memory (Figure 2). They are increasing investment in 2026, but it will take years for new memory fabs to come online. Finally, improvements in memory density have stalled over the past decade, meaning the industry can no longer count on the efficiency improvements that historically allowed it to dramatically grow output without building as many new fabs.
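The first factor above, HBM’s wafer intensity, can be sketched with a toy model. The three-to-four-times wafer multiplier comes from the report; the capacity and allocation figures below are purely illustrative, not industry data:

```python
# Toy model of the HBM/DRAM wafer tradeoff described above.
# Assumption (from the report): a gigabyte of HBM consumes roughly 3-4x
# the wafer capacity of a gigabyte of conventional DRAM. All other
# numbers are illustrative.
HBM_WAFER_MULTIPLIER = 3.5  # midpoint of the 3-4x range

def memory_output_gb(total_capacity_gb: float, hbm_share: float) -> dict:
    """Split a fixed wafer budget between HBM and conventional DRAM.

    total_capacity_gb: GB of conventional DRAM the fabs could produce
    if every wafer went to standard DRAM.
    hbm_share: fraction of wafer capacity reallocated to HBM.
    """
    hbm_wafers = total_capacity_gb * hbm_share
    dram_wafers = total_capacity_gb - hbm_wafers
    return {
        "hbm_gb": hbm_wafers / HBM_WAFER_MULTIPLIER,  # fewer GB per wafer
        "dram_gb": dram_wafers,
    }

# Hypothetically reallocating 40% of wafer capacity to HBM:
out = memory_output_gb(100.0, 0.40)
print(out)  # yields only ~11.4 GB of HBM while DRAM output falls by 40 GB
```

The asymmetry is the point: every unit of wafer capacity shifted to HBM removes a full unit of conventional DRAM output but yields only a fraction of that in HBM, which is why HBM growth squeezes the consumer memory market so sharply.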

These factors have combined to cause what an executive at Micron has described as “the most significant disconnect between demand and supply in terms of magnitude as well as time horizon that we’ve experienced in my 25 years in the industry.” Despite investments in additional manufacturing capacity, Micron CEO Sanjay Mehrotra has stated that “aggregate industry supply will remain substantially short of the demand for the foreseeable future,” and SK Hynix CEO Kwak Noh-jung has stated that the “current shortage could continue until 2030.”

Demand for AI chips is straining manufacturing capacity for both advanced logic and memory, limiting the rate of America’s AI buildout. Given the inelastic nature of these supply chains in the short term, these constraints will persist through 2026.

Other Tightness in the Chip Supply Chain

Other constraints to the AI buildout could soon emerge as the technology continues to scale and evolve. For example, as AI handles more agentic tasks, such as web browsing, that rely on CPUs rather than graphics processing units (GPUs), CPU shortages are likely as the market adapts.

Advanced packaging, the process of integrating compute dies and HBM stacks onto a single substrate, was a binding constraint on AI chip production in 2023. The dominant advanced packaging technology is TSMC’s Chip-on-Wafer-on-Substrate (CoWoS), which is used in nearly every major AI chip. Since 2023, CoWoS capacity has expanded significantly, with SemiAnalysis characterizing it as “tight but easing” as constraints on memory and logic wafers have become relatively more severe. With that said, packaging constraints have yet to fully disappear. CoWoS capacity was reportedly the bottleneck that forced industry analysts to revise their forecasts of Google’s 2026 AI chip production plans from four million to three million chips.

At the company level, the most binding constraint is a function of existing contracts, supply agreements, and capital planning. For example, in late 2025, Microsoft continued to highlight energy as its primary concern, while in early 2026 OpenAI executives stated that their binding constraint had shifted from power to chips. However, these constraints also interact dynamically. If a company expects to be power constrained, it will not invest as aggressively in securing chips that it cannot run, which in turn weakens demand signals for chip manufacturers to expand capacity. But some industry analysts expect that, in aggregate, chip constraints, rather than power constraints, will be the binding constraint on AI in 2026 and beyond.

Taken together, these supply chain constraints make AI chip production a significant bottleneck on AI progress for 2026 and potentially for years to come.

Policy Implications

Given the constraints on AI chip supply, the United States has both greater leverage and greater reason to ensure every chip is put to its highest-value use.

Rising Costs May Threaten Beneficial Uses of Compute

Increased demand and constrained supply for AI chips threaten to keep AI compute prices elevated for the near future. Well-funded AI companies can likely absorb the premium, but researchers and academics risk being priced out of access to the computing power needed for foundational research and innovation. To mitigate this risk, the U.S. government should increase funding for compute subsidies through the National AI Research Resource initiative (NAIRR). NAIRR seeks to provide American researchers with access to computing power and other resources for AI research and AI-enabled scientific discovery. Congress has approved $30 million to continue the program through fiscal year 2026—a year in which hyperscalers are projected to spend around $690 billion in AI capital investments. This year, for every dollar allocated to NAIRR, the private sector is investing roughly $23,000 in AI. To offset rising compute costs and ensure access for U.S. researchers, students, and small businesses, Congress should increase NAIRR’s funding and ensure AI-enabled science and innovation continue apace.
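The funding ratio cited above is simple arithmetic, and a quick back-of-envelope check using the report’s own figures ($30 million for NAIRR versus roughly $690 billion in projected hyperscaler capital expenditure) confirms it:

```python
# Back-of-envelope check of the NAIRR funding ratio cited above.
# Both figures come from the report: a $30 million NAIRR appropriation
# versus ~$690 billion in projected private AI capex for fiscal year 2026.
nairr_funding = 30e6    # NAIRR appropriation, USD
private_capex = 690e9   # projected hyperscaler AI capex, USD

ratio = private_capex / nairr_funding
print(f"Private AI investment per NAIRR dollar: ${ratio:,.0f}")
# Private AI investment per NAIRR dollar: $23,000
```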

Exporting Chips to Autocracies Such as China Undercuts U.S. and Allied Chip Access

The bottlenecks in AI chip supply mean that current chip allocation is zero-sum. Every chip exported to an authoritarian regime is one less chip available to firms in the United States and allied democracies to train a better model, serve more customers, or strengthen the democratic AI ecosystem. In a supply-constrained environment, the United States and its allies should treat chip access as a strategic resource and prioritize exports to democracies over authoritarian states that are more likely to deploy AI in ways that undermine U.S. interests and democratic norms. Without location verification on chips or other monitoring mechanisms, there is no way to confirm that exported chips are not being diverted to harmful actors within these nations. Even if exported chips do not directly reach harmful actors, increasing overall chip supply in a severely chip-constrained country such as China frees up marginal compute that can be redirected to these actors.

The direct tradeoffs of chip exports to China are clear. In March 2026, when regulatory uncertainty stalled H200 sales to China, NVIDIA redirected TSMC capacity from H200 production to its next-generation Vera Rubin chips, which had confirmed orders from OpenAI, Google, and other American firms. As one person familiar with the decision put it: “Nvidia has to move on to what it can achieve with certainty, especially when there’s a shortage of supply for its advanced stuff.” Less-advanced AI chips like NVIDIA’s H200 consume the same constrained manufacturing capacity as current frontier AI chips (including TSMC wafer capacity, HBM, and advanced packaging), meaning they too come at a direct cost to U.S. and allied supply. The Bureau of Industry and Security rule permitting the sale of H200 chips to China requires exporters to certify that shipments will not divert capacity from the U.S. market. But when AI chips all draw on the same finite pool of upstream capacity, it is difficult to see how any company could credibly certify that chip exports to China do not come at the expense of American supply.

Some stakeholders argue that allowing limited chip sales to China can slow domestic Chinese chip development by keeping Chinese companies dependent on American hardware. However, Beijing is already pouring billions into domestic chip production, and chips alone do not create lasting lock-in. While China is currently constrained in the quantity and quality of AI chips that it can produce, every U.S. chip sent to China supports the buildout of a competing AI ecosystem that will eventually challenge American firms in third-country markets. Continued sales risk eroding the very advantage that the AI Exports Program and Pax Silica partnerships are designed to secure.

Exporting the American AI Stack Remains a Critical Imperative

Blocking AI chip sales to China does not mean the United States should hoard them from the world. Exporting AI chips to allies directly benefits U.S. companies and can reinforce America’s AI advantage. Most chip exports involve American firms building data centers abroad, which then allocate that compute to its most effective use. For instance, they can use the compute to deploy AI services to consumers to generate revenue, advance internal R&D, or generate synthetic data for training new AI models. When AI industry executives and analysts identify chips as the tightest constraint, they are already factoring in data center buildouts abroad and in allied countries. Restricting exports to allies would threaten that capacity, potentially making the availability of powered data centers the binding constraint on the AI buildout once again.

Additionally, exporting AI and its requisite computing power is vital to America’s AI leadership. Failing to export American AI while China ramps up capacity to eventually serve third markets risks repeating the mistake of 5G, where the United States pioneered the underlying technology but ceded global deployment to Huawei. This error took years and billions of dollars to begin unwinding. The United States should export strategically in line with the AI Exports Program, locking in an ecosystem advantage over Chinese competitors and deepening partnerships with allies to secure long-term leadership.

Smuggling Matters More When Chips Are Scarce

Chip shortages add urgency to countering chip smuggling. Every smuggled chip means fewer chips available for the United States and its allies, undermining America’s AI progress at home and ability to export the American tech stack to the world. Washington needs to step up enforcement, including through technical innovations that make tracking cheaper and more scalable. The bipartisan Chip Security Act, which passed the House Foreign Affairs Committee unanimously in March 2026, would require location verification mechanisms for exported AI chips and mandatory reporting of suspected diversions. Passing this bill would make export controls and their enforcement more effective.

Washington should also consider updating its approach to export controls to better protect scarce resources, specifically by adopting a whitelisting approach for HBM exports. HBM is an unusually well-suited target for export controls because the number of legitimate buyers is extremely small. Only a handful of companies worldwide need raw HBM: those designing GPUs, AI accelerators, and other high-performance chips. A whitelist approach, restricting HBM sales to approved chip designers, would ensure that HBM ends up integrated into chips that serve the U.S. and allied market rather than being diverted to competitors.

Coordinating with Allies Can Help Strengthen Supply

U.S. and allied governments are already subsidizing and shaping semiconductor supply chains. But without joint planning, there’s a risk that every government will chase the same bottlenecks while underinvesting in less visible upstream inputs. This could result in wasteful duplication in some areas and persistent gaps in others. Pax Silica can serve as the coordination layer, aligning allied investment so that each country builds on existing strengths and collectively delivers an allied semiconductor base that is sufficient and not vulnerable to any single point of failure.

Given the constraints on AI chip supply, the United States has both greater leverage and greater reason to ensure every chip is put to its highest-value use.

The United States and its allies hold a commanding lead in AI chips, but it will not last forever. Given the shortage of AI chips, the United States should export aggressively to countries that strengthen its AI leadership and restrict exports to those that would build a competing ecosystem. Washington has a narrowing window in which to shape the global AI infrastructure buildout, deepen allied partnerships that reinforce long-term U.S. leadership, and anchor AI development in democratic values. Leveraging that window requires government and industry to work together and treat scarce chips as the strategic resource they are.

Appendix: Statements from AI Company Leaders on Compute Constraints

Deploying AI compute requires several inputs, including the AI chips themselves and powered data centers to run the chips. The quotes below document overall compute constraints and increasingly identify the AI chips themselves as the tightest constraint.

OpenAI President Greg Brockman (April 23, 2026):
“There’s not going to be enough compute in the world to meet the demand.”
“I think that we’re in a world that’s compute scarce. We need more compute. We need more chips.”
“Every team has people whose productivity is directly tied to how much compute they have. The most intense internal conversations we have are about that allocation.”

OpenAI Chief Operating Officer Brad Lightcap (March 24, 2026):
“Right now [the bottleneck] is memory. . . . It’s been power in the past.”

Broadcom Director of Product Marketing Natarajan Ramachandran (March 24, 2026):
“We are seeing that TSMC is hitting [production capacity] limits. . . . They will be increasing the capacity to 2027, but that has become a bottleneck, or that has kind of choked the supply chain in 2026.”

NVIDIA CEO Jensen Huang (February 6, 2026):
“Anthropic is making great money. OpenAI is making great money. If they could have twice as much compute, the revenues would go up four times as much. These guys are so compute constrained, and the demand is so incredibly great.”

OpenAI CEO Sam Altman (February 5, 2026):
“It [the bottleneck] goes back and forth. Right now, again, it’s chips.”

xAI CEO Elon Musk (February 5, 2026):
“We’ll take as many chips as our suppliers will give us. I’ve actually said this to TSMC and Samsung and Micron: ‘Please build more fabs faster.’”

Amazon CEO Andy Jassy (February 5, 2026):
“I think every provider would tell you, including us, that we could actually grow faster if we had all the supply that we could take.”

Google CEO Sundar Pichai (February 4, 2026):
“We’ve been supply constrained even as we’ve been ramping up our capacity.”

Meta Chief Financial Officer (CFO) Susan Li (January 28, 2026):
“We do continue to be capacity constrained. . . . Demands for compute resources across the company have increased even faster than our supply.”

TSMC CEO C. C. Wei (January 15, 2026):
“Today, from my point of view, still the bottleneck is TSMC’s wafer supply, not the power consumption.”
“So, today, their [TSMC customers’] message to me is—silicon from TSMC is a bottleneck, and asked me not to pay attention to all others, because [we] have to solve the silicon bottleneck first.”

xAI CEO Elon Musk (January 6, 2026):
“We’ve got two choices: Hit the chip wall or make a fab.”

Micron Executive Vice President Manish Bhatia (December 17, 2025):
“This is the most significant disconnect between demand and supply in terms of magnitude as well as time horizon that we’ve experienced in my 25 years in the industry.”

Micron CEO Sanjay Mehrotra (December 17, 2025):
“We believe that the aggregate industry supply [of chips] will remain substantially short of the demand for the foreseeable future.”

OpenAI CEO Sam Altman (November 6, 2025):
“Even today, we and others have to rate limit our products and not offer new features and models because we face such a severe compute constraint.”

Meta CEO Mark Zuckerberg (October 29, 2025):
“To date, we keep on seeing this pattern where we build some amount of infrastructure to what we think is an aggressive assumption. And then we keep on having more demand to be able to use more compute.”

Microsoft CFO Amy Hood (October 29, 2025):
“This quarter, demand again exceeded supply across workloads, even as we brought more capacity online.”

About the Authors

James Sanders is a research associate for the Technology and National Security Program at CNAS. His research focuses on the implications of AI, including how compute policy can be used to manage the risks and opportunities of advanced AI systems. Before CNAS, Sanders studied trends in AI capabilities and infrastructure at Epoch AI and worked as a quant trader at Susquehanna International Group. He holds a BA in mathematics from Rice University.

Janet Egan is a senior fellow and deputy director of the Technology and National Security Program at CNAS. Her research focuses on the national security implications of AI, including how compute policy can be used to manage the risks and opportunities of advanced AI systems. Prior to joining CNAS, Egan was a director in the Australian Department of the Prime Minister and Cabinet. Egan holds a master’s in public policy from the Harvard Kennedy School and a BA from Monash University in Australia.

Rory Madigan is a 2026 spring research fellow with the Cambridge Boston Alignment Initiative. He was previously an associate at the private investment firm Crane Partners. Madigan holds a master’s in global affairs from Tsinghua University in Beijing and a BA from Columbia University.

About the Technology and National Security Program

The Technology and National Security Program produces cutting-edge research and recommendations to help U.S. and allied policymakers responsibly win and manage the great power competition with China over critical and emerging technologies. The escalating U.S.–China competition in AI, biotechnologies, next-generation information and communications technologies and digital infrastructure, and quantum information sciences will have far-reaching implications for U.S. foreign policy and national and economic security.

The program focuses on high-impact technology areas with in-depth, evidence-based analysis to assess U.S. leadership vis-à-vis China, anticipate technology-related risks to security and democratic values, and outline bold but actionable steps for the United States and its allies to lead in responsible technology development, adoption, and governance. A key focus of the program is convening the technology and policy communities to bridge gaps, exchange perspectives, and together develop solutions.

Acknowledgments

The authors would like to acknowledge Georgia Adamson, Vivek Chilukuri, Liam Epstein, Tim Fist, Isabel Juniewicz, Michelle Nie, Paul Scharre, Josh You, and others who provided valuable feedback and insights throughout the development of this report. The views expressed in this paper do not necessarily represent those of the acknowledged individuals or their affiliated organizations. The authors are also grateful to the CNAS publications and communications teams for their support and editing. This paper was made possible with the generous support of Coefficient Giving.

As a research and policy institution committed to the highest standards of organizational, intellectual, and personal integrity, CNAS maintains strict intellectual independence and sole editorial direction and control over its ideas, projects, publications, events, and other research activities. CNAS does not take institutional positions on policy issues and the content of CNAS publications reflects the views of their authors alone. In keeping with its mission and values, CNAS does not engage in lobbying activity and complies fully with all applicable federal, state, and local laws. CNAS will not engage in any representational activities or advocacy on behalf of any entities or interests and, to the extent that the Center accepts funding from non-U.S. sources, its activities will be limited to bona fide scholastic, academic, and research-related activities, consistent with applicable federal law. The Center publicly acknowledges on its website annually all donors who contribute.

  1. Isabel Juniewicz, Hyperscaler Capex Has Quadrupled since GPT-4's Release (Epoch AI, February 26, 2026), https://epoch.ai/data-insights/hyperscaler-capex-trend/.
  2. Nick Patience, AI Capex 2026: The $690B Infrastructure Sprint (Futurum Group, February 12, 2026), https://futurumgroup.com/insights/ai-capex-2026-the-690b-infrastructure-sprint/; Michael Cembalest, Smothering Heights: Is the Largest Moat in Market History Indestructible? (J. P. Morgan Asset & Wealth Management, January 1, 2026), https://am.jpmorgan.com/content/dam/jpm-am-aem/global/en/insights/eye-on-the-market/smothering-heights-amv.pdf.
  3. Cy McGeady et al., The Electricity Supply Bottleneck on U.S. AI Dominance (Center for Strategic and International Studies, March 3, 2025), https://www.csis.org/analysis/electricity-supply-bottleneck-us-ai-dominance.
  4. Sam Altman, “TBPN’s Run of Show: Chip Bottleneck vs. Energy Bottleneck,” TBPN, podcast, February 6, 2026, https://tbpn.substack.com/p/tbpns-run-of-show-chip-bottleneck.
  5. Georgia Adamson and Tim Fist, When Do More AI Chips for China Mean Fewer for the United States? (Institute for Progress, March 27, 2026), https://ifp.org/ai-chip-supply-diversion/.
  6. Jaime Sevilla and Edu Roldán, Training Compute of Frontier AI Models Grows by 4-5x per Year (Epoch AI, May 28, 2024), https://epoch.ai/blog/training-compute-of-frontier-ai-models-grows-by-4-5x-per-year/.
  7. Ben Cottier et al., LLM Inference Prices Have Fallen Rapidly but Unequally across Tasks (Epoch AI, March 12, 2025), https://epoch.ai/data-insights/llm-inference-price-trends/.
  8. Thariq (@trq212), “To manage growing demand for Claude we're adjusting our 5 hour session limits for free/Pro/Max subs during peak hours. Your weekly limits remain unchanged. During weekdays between 5am–11am PT / 1pm–7pm GMT, you'll move through your 5-hour session limits faster than before,” X, March 26, 2026, https://x.com/trq212/status/2037254607001559305.
  9. Sundar Pichai, Alphabet 2025 Q4 Earnings Call Transcript (Alphabet Inc., February 4, 2026), https://abc.xyz/investor/events/event-details/2026/2025-Q4-Earnings-Call-2026-Dr_C033hS6/default.aspx.
  10. Robi Rahman, Performance per Dollar Improves around 30% Each Year (Epoch AI, October 23, 2024), https://epoch.ai/data-insights/price-performance-hardware/.
  11. Dylan Patel, “Deep Dive on the 3 Big Bottlenecks to Scaling AI Compute,” Dwarkesh Podcast, podcast, March 13, 2026, https://www.dwarkesh.com/p/dylan-patel.
  12. Doug O'Laughlin et al., The Great GPU Shortage—Rental Capacity—Launching Our H100 1 Year Rental Price Index (SemiAnalysis, February 5, 2026), https://newsletter.semianalysis.com/p/the-great-gpu-shortage-rental-capacity; Carmen Li, “A100 vs H100: When GPU Prices Stop Dancing in Sync,” Medium, October 6, 2025, https://medium.com/@cli_87015/a100-vs-h100-when-gpu-prices-stop-dancing-in-sync-119c23d35b50; Carmen Li, “GPU Market Update (2024–2025): Index Repricing and Convergence in H100 vs. A100,” Medium, November 4, 2025, https://medium.com/@cli_87015/gpu-market-update-2024-2025-index-repricing-and-convergence-in-h100-vs-a100-6ee537469f62; Carmen Li, “H100 Price Spike: Understanding the 10% Surge in GPU Rental Costs,” Silicon Data Blog, January 9, 2026, https://www.silicondata.com/blog/h100-price-spike; and Carmen Li (@carmenli), “Jensen Huang at GTC today: ‘Spot pricing is skyrocketing’—not just the latest generation, but two generations old. Our data confirms it. The Silicon Data B200 index has risen from ~$4.40/hr in January to over $5.04/hr today. That's 15%+ in under three months — and accelerating,” X, March 16, 2026, https://x.com/carmenli/status/2033644883089506331.
  13. Wenee Lee, “Broadcom Flags Supply Constraints, Says TSMC Capacity a Bottleneck,” Reuters, March 24, 2026, https://www.reuters.com/world/asia-pacific/broadcom-flags-supply-constraints-says-tsmc-capacity-bottleneck-2026-03-24/.
  14. C. C. Wei, TSMC Q4 2025 Earnings Call Transcript (Taiwan Semiconductor Manufacturing Company, January 15, 2026), https://investor.tsmc.com/english/encrypt/files/encrypt_file/reports/2026-01/51d09df96cd89ac19d65af39032b038dc2896a24/TSMC%204Q25%20Transcript.pdf.
  15. Scott Jones et al., Surviving the Silicon Storm: Why the Automotive Industry Is the Hardest Hit and How Automakers—and Other Chip Buyers—Can Prepare for Future Semiconductor Shortages (KPMG, 2021), https://assets.kpmg.com/content/dam/kpmg/br/pdf/2021/06/automotive-semiconductor-shortage.pdf.
  16. AlixPartners, “Shortages Related to Semiconductors to Cost the Auto Industry $210 Billion in Revenues This Year, Says New AlixPartners Forecast,” press release, September 23, 2021, https://www.alixpartners.com/newsroom/press-release-shortages-related-to-semiconductors-to-cost-the-auto-industry-210-billion-in-revenues-this-year-says-new-alixpartners-forecast/.
  17. Simon Hinds, “How Misguided Orders Disrupt the Semiconductor Market,” Altium, September 1, 2025, https://resources.altium.com/p/misguided-orders-semiconductor-market.
  18. Dylan Patel et al., Memory Mania: How a Once-in-Four-Decades Shortage Is Fueling a Memory Boom (SemiAnalysis, February 6, 2026), https://newsletter.semianalysis.com/p/memory-mania-how-a-once-in-four-decades.
  19. Hwang Min-gyu, “Samsung Electronics Prepares for Memory Downturn Risk,” Chosun Daily, March 13, 2026, https://www.chosun.com/english/industry-en/2026/03/13/HVY7KVVDTBE2FHYWXFIOBHR5KU/.
  20. Isabel Hilton, “Taiwan Makes the Majority of the World's Computer Chips: Now It’s Running Out of Electricity,” Wired, October 4, 2024, https://www.wired.com/story/taiwan-makes-the-majority-of-the-worlds-computer-chips-now-its-running-out-of-electricity/.
  21. Wei, TSMC Q4 2025 Earnings Call Transcript; Robyn Klingler-Vidra, “How Taiwan Came to Dominate the Global Chip Industry,” The Conversation, February 9, 2026, https://theconversation.com/how-taiwan-came-to-dominate-the-global-chip-industry-276939.
  22. Sravan Kundojjala et al., Apple-TSMC: The Partnership That Built Modern Semiconductors (SemiAnalysis, January 8, 2026), https://newsletter.semianalysis.com/p/apple-tsmc-the-partnership-that-built.
  23. Una Hajdari, “World's Biggest Chipmaker TSMC Doubles Down on AI, Sees Profit Lift,” Euronews, January 15, 2026, https://www.euronews.com/business/2026/01/15/worlds-biggest-chipmaker-tsmc-doubles-down-on-ai-sees-profit-lift; “Memory Price Rally May Run Past 2028 as Samsung, SK Hynix Reportedly Cautious on Expansion,” TrendForce, December 2, 2025, https://www.trendforce.com/news/2025/12/02/news-memory-price-rally-may-run-past-2028-as-samsung-sk-hynix-reportedly-cautious-on-expansion/.
  24. Juniewicz, Hyperscaler Capex Has Quadrupled since GPT-4's Release; “Financial Statements,” capital expenditure data for Micron Technology, TSMC, and SK Hynix, Wall Street Journal Market Data, accessed April 17, 2026, https://www.wsj.com/market-data/; Charlotte Trueman, “TSMC Announces 2026 Capex Spend of $56bn as CEO Dismisses ‘Bubble’ Concerns but Warns the Chipmaker Must Invest Carefully,” DatacenterDynamics, January 15, 2026, https://www.datacenterdynamics.com/en/news/tsmc-announces-2026-capex-spend-of-56bn-after-posting-eighth-consecutive-quarter-of-growth/; Sanjay Mehrotra, Micron Technology Fiscal Q2 2026 Financial Results (Micron Technology Inc., March 18, 2026), https://investors.micron.com/static-files/9c0becf5-df56-4eec-bd67-453dda68b273; and “SK Hynix Upgraded to ‘BBB+’ on Memory Sales,” S&P Global Ratings, February 5, 2026, https://www.spglobal.com/ratings/en/regulatory/article/-/view/type/HTML/id/3513063.
  25. Ian King et al., “Anthropic Tops $30 Billion Run Rate, Seals Broadcom Deal,” Bloomberg, April 6, 2026, https://www.bloomberg.com/news/articles/2026-04-06/broadcom-confirms-deal-to-ship-google-tpu-chips-to-anthropic.
  26. Qianer Liu, “TSMC Can't Make AI Chips Fast Enough,” The Information, January 14, 2026, https://www.theinformation.com/articles/tsmc-make-ai-chips-fast-enough; Rich Duprey, “Here's Why Taiwan Semiconductor Manufacturing Holds the Keys to AI's Explosive Growth,” 24/7 Wall St., January 3, 2026, https://247wallst.com/investing/2026/01/03/heres-why-taiwan-semiconductor-manufacturing-holds-the-keys-to-ais-explosive-growth/.
  27. “Asia’s Chipmakers Reportedly Eye $136B Spend in 2026, Up 25% YoY, Spanning Foundry and Memory,” TrendForce, March 4, 2026, https://www.trendforce.com/news/2026/03/04/news-asias-chipmakers-reportedly-set-to-spend-136b-in-2026-up-25-yoy-spanning-foundry-and-memory/.
  28. Chang Chien-chung et al., “TSMC’s Fab 2 in Arizona to Begin Mass Production in 2nd Half of 2027,” Focus Taiwan, January 15, 2026, https://focustaiwan.tw/business/202601150025.
  29. “Musk Says Tesla's Mega AI Chip Fab Project to Launch in Seven Days,” Reuters, March 14, 2026, https://www.reuters.com/business/autos-transportation/musk-says-teslas-gigantic-chip-fab-project-launch-seven-days-2026-03-14/.
  30. Margaret Kindling, “Chip Industry Fun Facts: Super Clean(room), Microchip Sprint, Computing Power Moonshot and Microscopic Scale,” SEMI, October 9, 2023, https://www.semi.org/en/blogs/semi-news/chip-industry-fun-facts-super-cleanroom-microchip-sprint-computing-power-moonshot-microscopic-scale.
  31. Pete Singer, “Building Fabs in the U.S. vs. Taiwan: Twice as Long, Twice as Much,” Semiconductor Digest, February 18, 2025, https://www.semiconductor-digest.com/building-fabs-in-the-u-s-vs-taiwan-twice-as-long-twice-as-much/.
  32. Luke James, “TSMC Says Advanced-Node Capacity Falls ‘About Three Times Short’ of AI Demand,” Tom’s Hardware, November 25, 2025, https://www.tomshardware.com/tech-industry/semiconductors/tsmc-csays-advanced-node-capacity-falls-short-of-ai-demand; “Data on AI Chip Sales,” Epoch AI, accessed April 17, 2026, https://epoch.ai/data/ai-chip-sales.
  33. James, “TSMC Says Advanced-Node Capacity Falls ‘About Three Times Short’ of AI Demand.”
  34. James, “TSMC Says Advanced-Node Capacity Falls ‘About Three Times Short’ of AI Demand.”
  35. Ivan Chiam et al., The Great AI Silicon Shortage (SemiAnalysis, March 12, 2026), https://newsletter.semianalysis.com/p/the-great-ai-silicon-shortage.
  36. “Nvidia Introduces New AI Platform Featuring Six Chips Made by TSMC,” Taipei Times, January 7, 2026, https://www.taipeitimes.com/News/biz/archives/2026/01/07/2003850159; Max Weinbach, “Google Cloud Next 2025: Ironwood TPU, Agent Toolkits, and Google's Vertical Advantage,” Creative Strategies, April 9, 2025, https://creativestrategies.com/research/google-cloud-next-2025-ironwood-tpu-agent-toolkits-and-googles-vertical-advantage/.
  37. Yoo Ji-han, “TSMC’s 2028 Capacity Full, Samsung Foundry Emerges as Alternative,” Chosun Daily, March 30, 2026, https://www.chosun.com/english/industry-en/2026/03/30/3FDTIFVVFJEMXD4XSUFFASEGME/.
  38. Bill Wiseman et al., Semiconductors Have a Big Opportunity—but Barriers to Scale Remain (McKinsey & Company, April 21, 2025), https://www.mckinsey.com/industries/semiconductors/our-insights/semiconductors-have-a-big-opportunity-but-barriers-to-scale-remain.
  39. Yoo, “TSMC’s 2028 Capacity Full, Samsung Foundry Emerges as Alternative.”
  40. Chiam et al., The Great AI Silicon Shortage.
  41. Josh You et al., Global AI Computing Capacity Is Doubling Every 7 Months (Epoch AI, January 9, 2026), https://epoch.ai/data-insights/ai-chip-production.
  42. Anton Shilov, “Apple Concedes It Is Constrained by TSMC's Supply of Advanced Chips,” Tom's Hardware, February 2, 2026, https://www.tomshardware.com/tech-industry/semiconductors/apple-concedes-it-is-constrained-by-tsmcs-supply-of-advanced-chips-storage-and-memory-are-also-in-short-supply-firm-isnt-projecting-supply-conditions-beyond-the-second-quarter; Lee, “Broadcom Flags Supply Constraints, Says TSMC Capacity a Bottleneck”; and C. C. Wei, TSMC Q1 2026 Earnings Call Transcript.
  43. Michael Davies et al., Efficient LLM Inference: Bandwidth, Compute, Synchronization, and Capacity Are All You Need (arXiv, July 18, 2025), https://arxiv.org/html/2507.14397v1.
  44. Luke Emberson, Total AI Chip Memory Bandwidth Has Grown 4.1x per Year, Now Reaching 70 Million Terabytes per Second (Epoch AI, March 24, 2026), https://epoch.ai/data-insights/hbm-shipped.
  45. Chiam et al., The Great AI Silicon Shortage.
  46. “ISPPDR47 Index Pricing Data,” Bloomberg L.P., Bloomberg Terminal, accessed April 1, 2026.
  47. Jowi Morales, “Don't Wait If You’re Planning to Upgrade Your RAM or SSD, Kingston Rep Warns,” Tom’s Hardware, December 16, 2025, https://www.tomshardware.com/pc-components/ram/dont-wait-if-youre-planning-to-upgrade-your-ram-or-ssd-kingston-rep-warns-says-prices-will-continue-to-go-up-nand-costs-up-246-percent.
  48. Emma Powell, “Sony PlayStation 6 Launch ‘to Be Delayed’ by Shortage of Chips,” The Times, February 17, 2026, https://www.thetimes.com/business/companies-markets/article/sony-playstation-6-launch-delayed-chips-shortage-rztj9r7mc; Jenna Benchetrit, “Why a Memory Chip Shortage Is Wreaking Havoc on the Consumer Electronics Industry,” CBC News, February 27, 2026, https://www.cbc.ca/news/business/ram-shortage-consumer-electronics-9.7102991.
  49. Meghan Bobrowsky, “Meta Will Run Some Servers Longer in Response to Memory Shortage,” Wall Street Journal, April 29, 2026, https://www.wsj.com/tech/meta-will-run-some-servers-longer-in-response-to-memory-shortage-9bb75737.
  50. Patel et al., Memory Mania.
  51. “Data on Machine Learning Hardware,” memory storage data for AI chips, Epoch AI, accessed April 17, 2026, https://epoch.ai/data/machine-learning-hardware.
  52. Sohee Kim, “SK Hynix Cuts Capex in Half with ‘Unprecedented’ Demand Drop,” Bloomberg, October 25, 2022, https://www.bloomberg.com/news/articles/2022-10-25/sk-hynix-to-halve-2023-capital-spending-after-profit-plunge.
  53. Dylan Patel et al., The Memory Wall: Past, Present, and Future of DRAM (SemiAnalysis, September 2, 2024), https://newsletter.semianalysis.com/p/the-memory-wall.
  54. “Micron Gives Rosy Sales Forecast after AI Boom Spurs Demand,” Bloomberg, December 17, 2025, https://www.bloomberg.com/news/articles/2025-12-17/micron-gives-rosy-sales-forecast-after-ai-boom-spurs-demand.
  55. Sanjay Mehrotra, Micron Technology Fiscal Q1 2026 Earnings Call Transcript (Micron Technology Inc., December 17, 2025), https://investors.micron.com/static-files/088991c5-a249-4f66-a0a6-258d9b66f3f9; Heyong et al., “South Korea's SK Group Chairman Expects Chip Wafer Shortage to Last until 2030, Eyes US ADR Listing,” Reuters, March 16, 2026, https://www.reuters.com/world/asia-pacific/south-koreas-sk-group-chairman-expects-chip-wafer-shortage-last-until-2030-eyes-2026-03-16/.
  56. “How Agentic AI Is Reshaping the CPU:GPU Ratio,” TrendForce, April 14, 2026, https://insights.trendforce.com/p/agentic-ai-cpu-gpu; Aaron Lee, “CPU Shortage More Acute than Memory; Industry Awaits Intel 18A Yield Improvement,” DigiTimes, April 17, 2026, https://www.digitimes.com/news/a20260417PD200/cpu-intel-pc-amd-industrial.html.
  57. Dylan Patel et al., AI Capacity Constraints: CoWoS and HBM Supply Chain (SemiAnalysis, July 5, 2023), https://newsletter.semianalysis.com/p/ai-capacity-constraints-cowos-and.
  58. Ray Wang, “Too Important to Ignore: Unpacking Advanced Packaging for AI Semiconductor,” Futurum Group, press release, August 27, 2025, https://futurumgroup.com/press-release/too-important-to-ignore-unpacking-advanced-packaging-for-ai-semiconductor/.
  59. Chiam et al., The Great AI Silicon Shortage.
  60. Duprey, “Here's Why Taiwan Semiconductor Manufacturing Holds the Keys to AI's Explosive Growth.”
  61. Satya Nadella, “All Things AI with Sam Altman and Satya Nadella: A Halloween Special,” video, BG2 Pod, October 31, 2025, 1 hr., 14 min., 21 sec., https://www.youtube.com/watch?v=Gnl833wXRz0; Alicia Tang and Oma Seddiq, “OpenAI's Lightcap Sees Memory Shortage as Bottleneck Risk for AI,” Bloomberg, March 24, 2026, https://www.bloomberg.com/news/articles/2026-03-24/openai-s-lightcap-sees-memory-shortage-as-bottleneck-risk-for-ai; and Altman, “TBPN’s Run of Show.”
  62. Chiam et al., The Great AI Silicon Shortage.
  63. Patience, AI Capex 2026; Cembalest, Smothering Heights.
  64. Patience, AI Capex 2026; Cembalest, Smothering Heights; Rep. Tom Cole, “Explanatory Statement Regarding H.R. 6938, Commerce, Justice, Science; Energy and Water Development; and Interior and Environment Appropriations Act, 2026,” Congressional Record 172, no. 5 (January 8, 2026): H255, https://www.congress.gov/congressional-record/volume-172/issue-5/house-section/article/H255-1.
  65. Zijing Wu, “Nvidia Stops Production of Chips Intended for Chinese Market,” Financial Times, March 5, 2026, https://www.ft.com/content/47f1cf56-209f-46fb-a437-f769b9ccb2cb.
  66. Wu, “Nvidia Stops Production of Chips Intended for Chinese Market.”
  67. Adamson and Fist, When Do More AI Chips for China Mean Fewer for the United States?
  68. Janet Egan and James Sanders, “CNAS Insights: Unpacking the H200 Export Policy,” Center for a New American Security, January 16, 2026, https://www.cnas.org/publications/commentary/cnas-insights-unpacking-the-h200-export-policy.
  69. Janet Egan, “Selling AI Chips Won't Keep China Hooked on U.S. Technology,” Center for a New American Security, September 3, 2025, https://www.cnas.org/publications/commentary/selling-ai-chips-wont-keep-china-hooked-on-u-s-technology.
  70. Chris McGuire, “China's AI Chip Deficit: Why Huawei Can't Catch Nvidia and U.S. Export Controls Should Remain,” Council on Foreign Relations, December 15, 2025, https://www.cfr.org/articles/chinas-ai-chip-deficit-why-huawei-cant-catch-nvidia-and-us-export-controls-should-remain.
  71. “Huawei 5G Decision ‘Will Cost up to £2bn,’” BBC News, July 14, 2020, https://www.bbc.com/news/uk-53406464.
  72. John T. Watts, “The Battle for 5G Leadership Is Global and the US Is Behind: The White House‘s New Strategy Aims to Correct That,” Atlantic Council, May 23, 2020, https://www.atlanticcouncil.org/blogs/new-atlanticist/the-battle-for-5g-leadership-is-global-and-the-us-is-behind-the-white-houses-new-strategy-aims-to-correct-that/.
  73. Chip Security Act, H.R. 3447, 119th Cong. (2026), https://www.congress.gov/bill/119th-congress/house-bill/3447/text.
  74. Venkat Somala, Advanced Packaging and HBM, Not Logic Dies, Were the Bottlenecks on AI Chip Production in 2025 (Epoch AI, March 12, 2026), https://epoch.ai/data-insights/ai-chip-supply-chain-constraints/.
  75. Mireya Solís and Mathieu Duchâtel, “The Renaissance of the Japanese Semiconductor Industry,” The Brookings Institution, June 3, 2024, https://www.brookings.edu/articles/the-renaissance-of-the-japanese-semiconductor-industry/; Martin Chorzempa, “The US and Korean CHIPS Acts Are Spurring Investment but at a High Cost,” Peterson Institute for International Economics, June 10, 2024, https://www.piie.com/blogs/realtime-economics/2024/us-and-korean-chips-acts-are-spurring-investment-high-cost/.
  76. Tae Kim, “An Interview with OpenAI President Greg Brockman: ‘There’s Not Going to Be Enough Compute,’” Key Context, April 2026, https://taekim.substack.com/p/an-interview-with-openai-president.
  77. Tang and Seddiq, “OpenAI’s Lightcap Sees Memory Shortage as Bottleneck Risk for AI.”
  78. Lee, “Broadcom Flags Supply Constraints, Says TSMC Capacity a Bottleneck.”
  79. Jensen Huang, “AI Is Going to Fundamentally Change How We Compute Everything,” video, CNBC, February 6, 2026, 8 min., 34 sec., https://www.cnbc.com/video/2026/02/06/nvidia-ceo-jensen-huang-ai-is-going-to-fundamentally-change-how-we-compute-everything.html.
  80. Altman, “TBPN’s Run of Show: Chip Bottleneck vs. Energy Bottleneck.”
  81. Elon Musk, “Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method,” Dwarkesh Podcast, podcast, February 5, 2026, https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus.
  82. Andy Jassy, Amazon Q4 2025 Earnings Call Transcript (Amazon via Yahoo!Finance, February 5, 2026), https://finance.yahoo.com/quote/AMZN/earnings/AMZN-Q4-2025-earnings_call-406163.html.
  83. Pichai, Alphabet 2025 Q4 Earnings Call Transcript.
  84. Susan Li, Meta Platforms Q4 2025 Earnings Call Transcript (Meta Platforms Inc., January 28, 2026), https://s21.q4cdn.com/399680738/files/doc_financials/2025/q4/META-Q4-2025-Earnings-Call-Transcript.pdf.
  85. Wei, TSMC Q4 2025 Earnings Call Transcript.
  86. Elon Musk, “Elon Musk on AGI Timeline, US vs. China, Job Markets, Clean Energy and Humanoid Robots,” video, Moonshots with Peter Diamandis, January 6, 2026, 2 hr., 52 min., 10 sec., https://www.youtube.com/watch?v=RSNuB9pj9P8.
  87. “Micron Gives Rosy Sales Forecast after AI Boom Spurs Demand.”
  88. Mehrotra, Micron Technology Fiscal Q1 2026 Earnings Call Transcript.
  89. Sam Altman (@sama), “I would like to clarify a few things. First, the obvious one: we do not have or want government guarantees for OpenAI datacenters. We believe that governments should not pick winners or losers, and that taxpayers should not bail out companies that make bad business decisions,” X, November 6, 2025, https://x.com/sama/status/1986514377470845007.
  90. Mark Zuckerberg, Meta Platforms Q3 2025 Earnings Call Transcript (Meta Platforms Inc., October 29, 2025), https://s21.q4cdn.com/399680738/files/doc_financials/2025/q3/META-Q3-2025-Earnings-Call-Transcript.pdf.
  91. Amy Hood, Microsoft Fiscal Year 2026 First Quarter Earnings Call Transcript (Microsoft Corp., October 29, 2025), https://www.microsoft.com/en-us/investor/events/fy-2026/earnings-fy-2026-q1.
