Data Center Bottleneck Atlas
This atlas ranks the highest-leverage chokepoints constraining data center buildout across hyperscale cloud, AI factories, edge infrastructure, sovereign cloud, and enterprise compute. A bottleneck here means a throughput limiter that is slow to expand because of physics, capital intensity, manufacturing concentration, qualification cycles, regulatory approvals, or grid connection constraints. Rankings reflect cross-program leverage, persistence, and difficulty of substitution.
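The atlas does not publish a numeric scoring model, so the sketch below is only an illustrative way to read the three ranking criteria as a weighted composite score. The weights, criterion names as dictionary keys, and per-bottleneck scores are assumptions for illustration, not data from this page.

```python
# Illustrative sketch of the ranking logic described above. The criteria come
# from the text; the weights and the example scores are assumptions.

CRITERIA_WEIGHTS = {
    "cross_program_leverage": 0.40,   # how many deployment programs the chokepoint gates
    "persistence": 0.35,              # how long the constraint is expected to bind
    "substitution_difficulty": 0.25,  # how hard it is to design or source around
}

def composite_score(scores: dict[str, float]) -> float:
    """Weighted sum of 1-10 criterion scores; higher means a tighter bottleneck."""
    return sum(CRITERIA_WEIGHTS[name] * scores[name] for name in CRITERIA_WEIGHTS)

# Hypothetical scores, for illustration only.
candidates = {
    "Large power transformers": {"cross_program_leverage": 10, "persistence": 9, "substitution_difficulty": 9},
    "Grid interconnection":     {"cross_program_leverage": 10, "persistence": 10, "substitution_difficulty": 7},
    "HBM memory (HBM3E/HBM4)":  {"cross_program_leverage": 8,  "persistence": 8,  "substitution_difficulty": 9},
}

for name, scores in sorted(candidates.items(), key=lambda kv: -composite_score(kv[1])):
    print(f"{name}: {composite_score(scores):.2f}")
```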
The data center supply chain is now constrained at a different layer than at any time in its history. For two decades, server and IT hardware availability set the pace. Today, half of planned 2026 US data center capacity is delayed or canceled, and the binding constraint has shifted decisively to electrical equipment, grid interconnection, and AI accelerator memory supply. Understanding where the chain actually breaks is prerequisite to understanding why hyperscaler capex commitments and announced deployment timelines diverge so dramatically.
This page is the data center demand-side companion to the SemiconductorX Bottleneck Atlas, which covers where chip supply chains break upstream, and the ElectronsX Bottleneck Atlas, which covers electrification supply chains. The three atlases share a common framework but each covers a different segment of the AI-industrial buildout.
Master ranking
These bottlenecks span the full data center supply chain from grid interconnection through compute hardware. Power equipment dominates the top of the ranking because the chain has shifted from compute-limited to power-limited; AI accelerator memory and packaging follow because they directly gate the GPUs that fill the buildouts that finally get power.
| Rank | Bottleneck | Where it bites | What the bottleneck really is | Constraint type | Geographic concentration |
|---|---|---|---|---|---|
| 1 | Large power transformers (HV/MV) | Every data center connecting more than ~50 MW; AI factories and gigawatt campuses worst affected | Lead times have stretched from pre-pandemic norms to 24-30 months: 128 weeks for power transformers and 144 weeks for generator step-up units, with some custom orders now quoted at 4-5 years. The US meets only about 20% of its power transformer demand domestically and imports the remaining 80%. Approximately 50% of planned 2026 US data center capacity is delayed or canceled, primarily because of electrical equipment shortages. | Manufacturing capacity + grain-oriented electrical steel + skilled labor | Hitachi Energy, Siemens Energy, GE Vernova/Prolec, ABB dominant globally; Hyundai Electric expanding US footprint; concentrated demand competing with EV, renewables, and grid replacement |
| 2 | Grid interconnection capacity | Every new large-load deployment in PJM, ERCOT, MISO, CAISO, and most utility territories | PJM is taking approximately 8 years to bring new generation online while running capacity auctions on 2-3 year forward cycles. PJM's queue had 130+ GW of pre-2024 capacity-eligible projects, of which 30 GW remained in transition processing through 2026. The 2026/27 PJM capacity auction cleared at the FERC-approved cap of $333.44/MW-day. Northern Virginia, the world's largest data center cluster, faces multi-year waits for new interconnection at all but the most competitive sites. | Study process + transmission upgrade timelines + transmission planning lag + capacity market structure | Universal across US ISOs and most international markets. PJM most acute; ERCOT comparatively faster but tightening; CAISO and ISO-NE constrained by transmission siting |
| 3 | HBM memory (HBM3E and HBM4) | Every NVIDIA Blackwell and Rubin GPU; AMD MI-series; every frontier AI training and inference deployment | SK Hynix held 53-62% HBM market share through Q3 2025. SK Hynix CFO publicly stated the company "sold out our entire 2026 HBM supply"; Micron confirmed similar 2025-2026 capacity fully booked. Server DRAM prices surged 60-70% as memory makers reallocated capacity to HBM. NVIDIA Rubin's 288 GB HBM4 per GPU at 22 TB/s aggregate bandwidth depends on HBM4 16-Hi qualification at all three suppliers. New capacity does not meaningfully improve availability until 2027. | TSV stacking process specialization + capital intensity + qualification cycles + DRAM wafer reallocation | SK Hynix (South Korea) dominant; Samsung (South Korea) accelerating; Micron (US) third with aggressive 15K wafer/month HBM4 capacity target by end-2026 |
| 4 | Switchgear, breakers, and MV electrical equipment | Every data center site; both inside-the-fence and utility-side connection equipment | Long-lead electrical equipment timelines have moved from sub-50 weeks to 90+ weeks in less than three years. Switchgear, circuit breakers, busbar systems, and medium-voltage equipment compete with the same grain-oriented electrical steel and copper supply chain as transformers. Crusoe Energy began manufacturing its own switchgear to bypass traditional lead times - a sign of how acute the constraint has become at the operator level. | Manufacturing capacity + skilled assembly labor + steel and copper materials | Eaton, Schneider Electric, ABB, Siemens, GE Vernova; some Chinese manufacturers significant in components subject to tariff exposure |
| 5 | NVIDIA AI accelerator allocation (Blackwell, Rubin) | Every AI factory, neo-cloud, and enterprise AI training program not on hyperscaler priority allocation | NVIDIA AI GPU supply remains hyperscaler-allocated, with priority going to Microsoft, Meta, Google, AWS, Oracle, and CoreWeave under multi-year purchase commitments. Smaller buyers face 6-18 month lead times. The constraint is not raw foundry capacity but the cascading dependency on TSMC N3/N4, CoWoS advanced packaging, HBM, and ABF substrate - four upstream bottlenecks that each gate NVIDIA's GPU output independently. | Upstream silicon and packaging stack + customer concentration + market dominance | NVIDIA (US fabless); TSMC (Taiwan front-end); TSMC CoWoS (Taiwan); SK Hynix HBM (South Korea); ABF substrate (Japan) |
| 6 | CDUs and direct-to-chip cooling equipment | Every AI training rack at 50+ kW; GB200 NVL72 deployments; Rubin reference designs | Coolant Distribution Units are the hydraulic heart of direct-to-chip liquid cooling, and their production capacity is now the critical constraint on liquid cooling deployment. Vertiv expanded CDU manufacturing capacity 45x in 2024; Modine invested $100 million in a new CDU facility. Strategic acquisitions - Schneider's $850M Motivair acquisition (October 2024), KKR's CoolIT acquisition, Vertiv's CoolTera acquisition (December 2023), Eaton's market entry - reflect consolidation around liquid cooling as a systemic capacity constraint. Cold plate manufacturing involves multi-step specialized processes; pump and valve redesigns for non-conductive fluids extend traditional lead times. | Specialized manufacturing capacity + qualification cycles + cold plate process specialization | Vertiv, Schneider Electric/Motivair, CoolIT (KKR), Boyd Corporation, Modine - market expected to consolidate to fewer than 10 vendors by end of decade |
| 7 | CoWoS and advanced packaging capacity | Every NVIDIA H/B/Rubin-series GPU; AMD MI-series; high-bandwidth memory integration; large-die AI accelerators | CoWoS interposer packaging at TSMC is a separate capacity queue from wafer starts and was the binding constraint on H100 shipments through 2023. Even as TSMC has expanded CoWoS capacity, demand from NVIDIA's Rubin generation, multiple AI ASIC programs, and AMD GPUs runs ahead of supply. Advanced packaging is now recognized industry-wide as an equal bottleneck to wafer starts for AI programs - a classification that did not exist five years ago. See: SX:CoWoS | Capital-intensive specialty manufacturing + capacity expansion lead times + qualification cycles per generation | TSMC (Taiwan, dominant); ASE (Taiwan); Amkor (US/Korea) secondary. Samsung and Intel expanding but well behind TSMC at leading-edge advanced packaging |
| 8 | Construction labor (electrical, mechanical, controls) | Every AI factory and hyperscale buildout; especially in Northern Virginia, Phoenix, Texas, and Pacific Northwest clusters | MEP (mechanical, electrical, plumbing) trades, especially licensed electricians, instrumentation technicians, and BMS commissioning specialists, are in structural shortage as multiple gigawatt campuses commission simultaneously. Labor shortages compound equipment lead-time delays: equipment finally arrives but cannot be installed at the scheduled cadence. Specialized cleanroom, gas systems, and water treatment trades are even more constrained. | Skilled trade labor pool + apprenticeship pipeline + competing infrastructure projects (chip fabs, semiconductor reshoring) | Acute in major DC clusters (Northern Virginia, Phoenix, Columbus, Dallas, Reno) and worsened by adjacent CHIPS Act fab construction in Arizona, Ohio, Texas |
| 9 | Backup generators and prime power gensets | Every data center requiring N+1 or 2N backup; gigawatt sites with onsite gas turbine prime power | Caterpillar, Cummins, Kohler, and MTU diesel generator lead times have stretched to 18-30 months for large frames (2-3 MW units typical for hyperscale). Aeroderivative and industrial gas turbines used for behind-the-meter prime power (GE LM2500/LM6000, Siemens SGT-A35, Solar Turbines) are sold out through 2027 with hyperscalers and AI operators dominating the order book. Crusoe and others have shifted to onsite gas turbine prime power because grid wait times exceed turbine lead times. | Manufacturing capacity + supply chain (engines, controls) + competing demand from oil and gas, marine, peaker plants | Caterpillar (US); Cummins (US); Kohler (US); MTU/Rolls-Royce (Germany); GE Vernova (US); Siemens Energy (Germany); Solar Turbines/Caterpillar (US) |
| 10 | BESS at data center scale | Microgrid-equipped sites, peak-shaving deployments, ride-through and UPS replacement projects | Lithium-ion battery cell supply for grid-scale BESS competes with EV demand for the same LFP and NMC cell capacity, with hyperscaler microgrid deployments now significant buyers. Tesla Megapack, Fluence, BYD, CATL, Wartsila, and Saft hold dominant integrator positions. Fire safety standards (NFPA 855, IEC 62933) and increasingly stringent permitting extend order-to-commissioning timelines from roughly 12 months to 24+ months. Grid-scale BESS supply is improving but tight at the gigawatt-hour scale needed for hyperscale microgrid operations. | Cell supply chain + integrator capacity + permitting and fire code compliance | Cells: CATL (China dominant), BYD (China), LG Energy Solution (Korea), Samsung SDI (Korea). Integrators: Tesla (US), Fluence (US), Wartsila (Finland), Saft (France). See: EX:BESS |
| 11 | Optical transceivers and high-speed networking silicon | AI training cluster fabric (InfiniBand and RoCE); 800G/1.6T inter-cluster links; co-packaged optics | 800G optical transceiver supply was constrained through 2024-2025 as AI training cluster fabric scaling outpaced module manufacturing capacity. Coherent (formerly II-VI), Lumentum, Innolight, Eoptolink, and Accelink dominate; Chinese suppliers face increasing US scrutiny. Co-packaged optics (CPO) for next-generation switch silicon is a separate qualification challenge. NVIDIA Mellanox/Quantum, Broadcom Tomahawk, and Cisco Silicon One drive switch demand. NVLink Switch and InfiniBand topology design at training cluster scale concentrates supply chain pressure on a small number of components. | Specialty laser and photonic component manufacturing + module assembly + China policy exposure | Coherent (US); Lumentum (US); Innolight (China); Accelink (China); Eoptolink (China). NVIDIA Mellanox, Broadcom switch silicon (US fabless, TSMC fabricated) |
| 12 | Nuclear PPA capacity (restarted reactors and SMRs) | Hyperscaler firm-power procurement; gigawatt AI campuses requiring 24/7 carbon-free baseload | Restarted reactor capacity (Three Mile Island Unit 1 / Constellation-Microsoft, Palisades / Holtec) is single-site limited and largely committed. Behind-the-meter coupling (Talen-Amazon Susquehanna, Fermi Hypergrid SMR roadmap) faces NRC and FERC regulatory uncertainty. Small modular reactor PPAs (Oklo, X-energy, Kairos with hyperscaler partners) face commercial operation dates in the 2028-2032 window with regulatory risk. Nuclear is the strategic carbon-free firm baseload but the supply chain - reactor construction, fuel enrichment, regulatory approval - cannot scale on AI-buildout timelines. | Reactor construction capacity + NRC licensing + fuel supply + transmission interconnection at restart sites | US (Constellation, Vistra, Holtec, Talen); SMR vendors largely US-based with Westinghouse, GE Hitachi, and X-energy designs leading. See: EX:Nuclear Energy |
| 13 | Water rights and withdrawal permits | Evaporative-cooled hyperscale and AI sites; arid-region deployments; community-permit-sensitive markets | Water-stressed regions (Arizona, Nevada, Texas, Utah, Spain, parts of India) face hardening permitting limits on data center withdrawal at the same time AI cooling demand rises. Several US jurisdictions have enacted water withdrawal caps for new data center builds. Reputational and community opposition has gated multiple hyperscale projects regardless of engineering compliance. Operators are increasingly forced to specify dry or hybrid heat rejection at PUE penalty, and to negotiate reclaimed-water agreements for makeup supply. | Regulatory permitting + community acceptance + climate-driven water scarcity + reputational exposure | US Southwest (Arizona, Nevada), Texas, Utah; Iberian peninsula; parts of India and APAC. Less constrained in Pacific Northwest, Nordics, and Northern Europe |
| 14 | Skilled data center operations staff | All operational data centers; acute at AI factory operators expanding fleets | Operations staff with both facility (BMS, EPMS, mechanical, electrical) and IT (orchestration, observability, network operations) skill sets are in structural shortage. AI training operations specifically need skills (job topology, fabric tuning, GPU fleet management) that did not exist as a discipline three years ago. Hyperscalers retain talent through wage premia; smaller operators and colos face higher turnover. Remote operation from regional NOCs partially mitigates the constraint but does not eliminate it. | Labor pool development + training pipeline lag + cross-discipline skill gap (facility + IT) | Universal but acute in fast-growing markets and at sites distant from established DC labor pools (rural US sites, sovereign cloud regions, emerging markets) |
| 15 | Land with adequate power, water, and zoning | New gigawatt campus siting; AI factory and hyperscaler greenfield programs | Industrial land with the combination of available transmission capacity, water access, and supportive zoning has run out in primary clusters (Northern Virginia, Phoenix, Northern California). Operators have moved to secondary markets (Columbus, Atlanta, Reno, Dallas, Memphis, the Mid-South) and tertiary markets (rural Iowa, Nebraska, Texas Panhandle, Wyoming) chasing available power. Community opposition has emerged as a parallel constraint as data center water and power consumption become politically visible. AI megacampus siting now functions more like utility-scale generation siting than traditional commercial real estate. | Transmission proximity + water access + zoning approval + community acceptance + economic incentive structure | Primary clusters (NoVA, Phoenix, Bay Area) constrained; secondary clusters (Columbus, Reno, Dallas, Atlanta) tightening; tertiary expansion (Memphis, Iowa, Wyoming, rural Texas) opening |
Bottlenecks by supply chain layer
The master ranking collapses severity across layers. This view disaggregates by supply chain position, showing which layer the bottleneck lives in and the primary expansion barrier at each layer. Use this view when assessing which layer is binding for a specific program or platform.
| Layer | Primary bottleneck node | Key suppliers | Expansion barrier | Chokepoint level |
|---|---|---|---|---|
| Grid and Generation | Interconnection queue (PJM 8-year horizon); transmission upgrade timelines | PJM, ERCOT, MISO, CAISO, NYISO, ISO-NE, SPP (US RTOs) | FERC and state PUC processes; transmission siting; capacity market structure | Critical - 5+ year resolution timeline |
| Power Equipment (Utility-side) | Large power transformers (HV); substation switchgear and breakers | Hitachi Energy, Siemens Energy, GE Vernova/Prolec, ABB, Hyundai Electric | Manufacturing capacity (~$2B announced expansions); GOES steel supply; 4-5 year build-out timeline | Critical - the binding constraint on 2026-2028 deployment |
| Power Equipment (Inside-the-fence) | Distribution switchgear, busbar, PDUs, UPS systems | Eaton, Schneider Electric, ABB, Siemens, Vertiv, Mitsubishi Electric | Manufacturing capacity; tariff exposure on Chinese components; copper supply | High - 90+ week lead times |
| Onsite Generation | Diesel generators (large frames); industrial gas turbines for prime power | Caterpillar, Cummins, Kohler, MTU; GE Vernova, Siemens Energy, Solar Turbines (turbines) | Manufacturing capacity; engine supply chain; competing demand from peaker plants and oil-and-gas | High - sold out through 2027 for prime power frames |
| Cooling Infrastructure | CDUs and direct-to-chip cooling systems; cold plate manufacturing | Vertiv, Schneider/Motivair, CoolIT (KKR), Boyd, Modine; consolidation underway | Specialized manufacturing; cold plate process specialization; 2.5 MW CDU class new in 2024-2025 | Critical for AI deployments; rapidly expanding capacity |
| Compute Hardware (AI) | NVIDIA GPU allocation; HBM memory; CoWoS packaging; ABF substrate | NVIDIA (US fabless); TSMC (Taiwan); SK Hynix/Samsung/Micron (HBM); Ajinomoto (ABF) | Stacked upstream silicon and packaging dependencies; allocation by hyperscaler priority | Very high - cascading dependencies through SX bottlenecks |
| Networking | 800G/1.6T optical transceivers; AI fabric switch silicon | Coherent, Lumentum, Innolight, Accelink (optics); NVIDIA Mellanox, Broadcom (silicon) | Specialty laser components; module assembly capacity; China policy exposure | High - 2023-2025 was acute; 2026 modestly improved |
| Storage Infrastructure | High-density NAND for AI training data; nearline HDD for archival | Samsung, SK Hynix, Micron, Kioxia (NAND); Seagate, Western Digital, Toshiba (HDD) | DRAM/NAND capacity reallocated to HBM creates downstream NAND tightness | Medium-high; secondary effect of HBM reallocation |
| Construction and Trades | MEP labor; specialized trades (controls, instrumentation, BMS) | National contractors (DPR, Holder, Mortenson, Turner) and regional MEP specialists | Apprenticeship pipeline; competing demand from CHIPS Act fabs and industrial reshoring | High - structural; multi-year labor pool development |
| Land and Permits | Powered industrial land with water access and zoning | Tract, Stack Infrastructure, EdgeConneX, regional developers | Transmission proximity; water rights; zoning; community opposition | High in primary markets; tightening in secondary; opening in tertiary |
AI factory bottleneck stack
AI factories at gigawatt scale stack multiple top-ranked bottlenecks into a single dependency chain. The table below maps the bottleneck stack specific to AI training and AI-native inference deployment programs; a minimal sketch of how the stacked lead times gate commercial operation follows the table.
| Bottleneck layer | Specific constraint | Severity for AI factories |
|---|---|---|
| Grid interconnection | Multi-year wait for new generation in PJM, ERCOT, and most US ISOs; transmission upgrades may take 5-8 years | Critical - drives behind-the-meter generation strategy and tertiary-market siting |
| Large power transformers | 128-208 week lead times; gigawatt sites need multiple HV units | Critical - gates COD regardless of all other readiness |
| NVIDIA GPU allocation | Hyperscaler-priority allocation; smaller operators face 6-18 month lead times for Blackwell and Rubin | High - improving but still concentrating capacity at hyperscalers and named neo-clouds |
| HBM4 memory | SK Hynix sold out 2026; Samsung and Micron capacity expansion 2027+ | Critical for Rubin generation - 288 GB HBM4 per GPU |
| CoWoS packaging | TSMC capacity expansion ongoing but trails GPU demand from NVIDIA, AMD, and AI ASIC programs | High - separate queue from wafer starts |
| CDU and direct-to-chip cooling | Liquid cooling supply chain consolidating; CDU capacity expanding 45x but from a small base | High - GB200 and Rubin reference designs require DTC at scale |
| Onsite gas turbines (prime power) | Aeroderivative and industrial turbines sold out through 2027 | Critical for behind-the-meter strategies that bypass interconnection queues |
| 800G/1.6T optical fabric | Optical transceiver and fabric switch silicon supply | High - tens of thousands of links per training cluster |
| Construction MEP labor | Skilled trades shortage in major DC clusters compounded by adjacent CHIPS Act fab construction | High - extends commissioning timelines beyond equipment delivery |
| Operations staff | AI training cluster operations is a new discipline; talent pool not keeping pace with site count growth | Medium-high - mitigated by remote operations from regional NOCs |
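The sketch below illustrates the critical-path logic behind this stack under simplified assumptions: procurement items are ordered in parallel, the slowest item gates equipment readiness, and installation and commissioning time is added on top. The lead times loosely echo ranges quoted in the tables above where available and are otherwise placeholders; the function name and the install figure are assumptions, not quoted data.

```python
# Minimal sketch of how stacked bottlenecks gate a project's commercial
# operation date (COD). All numbers are illustrative, not vendor quotes.

PROCUREMENT_LEAD_TIMES_WEEKS = {
    "large_power_transformers": 128,   # table above quotes 128-208 weeks
    "mv_switchgear_and_breakers": 90,  # "90+ weeks"
    "gpu_allocation": 52,              # illustrative; 6-18 month lead times for non-priority buyers
    "cdu_and_liquid_cooling": 40,      # illustrative
    "optical_fabric": 30,              # illustrative
}

INSTALL_AND_COMMISSIONING_WEEKS = 36   # illustrative; stretched further by MEP labor shortage

def equipment_gated_cod_weeks(lead_times: dict[str, int], install_weeks: int) -> tuple[str, int]:
    """Return the binding procurement item and weeks-to-COD, assuming parallel orders."""
    binding_item = max(lead_times, key=lead_times.get)
    return binding_item, lead_times[binding_item] + install_weeks

item, weeks = equipment_gated_cod_weeks(PROCUREMENT_LEAD_TIMES_WEEKS, INSTALL_AND_COMMISSIONING_WEEKS)
print(f"Binding item: {item}; weeks to COD: {weeks} (~{weeks / 52:.1f} years)")
```

Shortening any non-binding item changes nothing in this model; only the binding item (here, transformers) moves the COD, which is why the table describes transformers as gating COD regardless of all other readiness.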
Geopolitical and policy exposure
The data center supply chain inherits all the geopolitical exposure of its upstream semiconductor and energy supply chains, plus some unique exposures of its own. This table maps where chokepoints intersect with policy and geopolitical risk.
| Bottleneck | Control jurisdiction | China exposure | Western program risk |
|---|---|---|---|
| NVIDIA GPU export controls | US BIS export controls (October 2022, expanded 2023, 2024, 2025) | Restricted from H100/H200/Blackwell at full performance; H20 and successors throttled for China market | Low for Western programs; revenue impact for NVIDIA |
| Electrical equipment from China | US tariffs and Section 232 considerations | China remains world's largest producer of grid electrical equipment components | High - tariffs and supply chain restrictions extend deployment timelines |
| HBM and DRAM (Korea concentration) | No formal Western export control; market concentration risk | CXMT (China) developing HBM but years behind; not a current Western supply alternative | Medium - Korea geopolitical stability is a structural risk for Western AI programs |
| TSMC Taiwan concentration (compute) | Taiwan; TSMC Arizona and Japan partial mitigation | Forcible Taiwan reunification is the catastrophic tail risk | Critical tail risk; CHIPS Act investments partially mitigate but cannot replace Taiwan in this decade |
| Optical transceivers (Chinese suppliers) | US scrutiny of Chinese components in critical infrastructure | Innolight, Accelink dominant in lower-tier transceivers; US programs increasingly require non-Chinese sourcing | Medium - dual-sourcing burden; cost premium for non-Chinese supply |
| EU AI Act and data sovereignty | European Union | Not directly applicable; EU sovereignty concerns extend to US hyperscaler dependency | Medium - drives sovereign cloud demand in EU; compliance overhead for AI workloads |
| Nuclear regulatory (NRC, FERC) | United States NRC and FERC for behind-the-meter and SMR PPAs | Not applicable | High for nuclear-backed AI strategies; regulatory pathway not yet settled for behind-the-meter coupling |
| Water and community permits | State and local US jurisdictions; EU member states | Not applicable | High and growing - reputational and regulatory exposure to data center water use |
Where substitution helps and where it does not
Substitution relieves a data center supply chain bottleneck when an alternative technology, vendor, or architectural approach can be deployed faster than the primary constraint can be expanded. Some bottlenecks have viable substitutes; others do not. A minimal sketch of this timing test follows the table.
| Bottleneck | Substitution viable? | Substitution path | What substitution does not solve |
|---|---|---|---|
| Large power transformers | Partial | Solid-state transformers (Heron Power, DG Matrix, Amperesand) - emerging but not at HV scale yet; modular power distribution; spec downgrade to existing equipment classes | Solid-state transformers do not yet replace large grid-tie HV transformers; modular alternatives are tactical rather than strategic |
| Grid interconnection | Yes - increasingly the standard response | Behind-the-meter onsite generation (gas turbines, nuclear PPAs, microgrids); siting at locations with existing capacity headroom | Behind-the-meter strategies have their own constraints (turbine lead times, NRC approval, environmental review) |
| HBM memory (HBM4) | Partial - across vendors only | Multi-source qualification (NVIDIA pursuing all three vendors for Rubin); GDDR7 for inference at lower bandwidth; HBM3E for non-Rubin programs | Combined SK Hynix + Samsung + Micron capacity is the binding constraint; no qualified fourth HBM vendor exists |
| NVIDIA GPU allocation | Partial | AMD MI-series; Intel Gaudi; hyperscaler internal silicon (Google TPU, AWS Trainium, Microsoft Maia, Meta MTIA); ASIC programs (Cerebras, Groq, SambaNova, Tenstorrent) | CUDA software ecosystem lock-in; HBM and CoWoS supply chains are shared by most alternatives |
| CoWoS packaging | Limited | Intel EMIB (for Intel programs); ASE and Amkor advanced packaging; alternative interposer architectures | TSMC CoWoS process maturity advantage; qualification timeline for alternative substrates is 12-24 months per program |
| CDUs and direct-to-chip | Limited | Rear-door heat exchangers for moderate density; immersion cooling at the highest density end; air cooling for lower-density workloads | DTC is the equilibrium for current AI accelerator density; air cannot scale to 100+ kW racks; immersion has its own qualification challenges |
| Diesel generators and gas turbines | Partial | BESS for short-duration ride-through; fuel cells (Bloom Energy, Plug Power); reciprocating gas engines for prime power at smaller scale | Long-duration backup at gigawatt scale; behind-the-meter prime power requires turbine class capacity |
| Water for cooling | Yes - actively deployed | Dry-cooled facilities; hybrid wet-dry; reclaimed water; closed-loop liquid cooling reducing makeup demand | PUE penalty for dry cooling in hot climates; reputational exposure even when withdrawal is technically permitted |
| Construction MEP labor | Limited | Modular and prefabricated construction; factory-built power skids; remote commissioning | Field installation and commissioning still require licensed trades; modular shifts but does not eliminate the constraint |
| Land in primary markets | Yes - secondary and tertiary market expansion underway | Columbus, Atlanta, Reno, Memphis, Iowa, Texas Panhandle, Wyoming; international tertiary markets | Network latency for some workloads; existing peering and carrier ecosystems; talent pool depth |
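The sketch below restates the substitution test from the paragraph above this table: a substitute helps only if qualifying and deploying it beats waiting for the primary constraint to clear. The function name and all timelines are illustrative assumptions, not figures from this page.

```python
# Minimal sketch of the substitution test: does the substitute path beat
# simply waiting out the primary constraint? Numbers are illustrative.

def substitution_helps(primary_relief_weeks: int,
                       substitute_qualification_weeks: int,
                       substitute_deployment_weeks: int) -> bool:
    """True if qualifying and deploying the substitute is faster than the primary constraint clearing."""
    return (substitute_qualification_weeks + substitute_deployment_weeks) < primary_relief_weeks

# Example: behind-the-meter gas turbines vs. waiting in an interconnection queue.
print(substitution_helps(primary_relief_weeks=8 * 52,        # multi-year queue, illustrative
                         substitute_qualification_weeks=26,  # permitting and design, illustrative
                         substitute_deployment_weeks=104))   # turbine lead time + install, illustrative
# -> True: the substitute path is shorter, consistent with the table calling it
#    "increasingly the standard response" - though it carries its own constraints.
```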
The bottleneck shift
For two decades, data center buildout was constrained primarily by IT hardware availability and capital. Servers, storage, networking, and the silicon inside them were the things that came up short. The industry's center of attention was the IT supply chain, and operations focused on how to deploy hardware once it arrived.
That equilibrium has broken. The binding constraints now sit at three layers that historically were not constraints at all: grid interconnection (an 8-year horizon in PJM), large power transformers (lead times of 4-5 years for some classes), and AI accelerator memory (sold out through 2026 at the dominant supplier). The compute hardware that gets so much industry attention is downstream of all three, gated by upstream chokepoints that AI factory operators cannot solve through procurement scale or capital deployment.
The implication for the industry is that the data center buildout is now more an electrical infrastructure problem than an IT problem. Hyperscalers spending $650 billion on AI infrastructure capex in 2026 cannot deploy that capital faster than transformers ship and grid connections clear. The companies positioned to alleviate the constraint - GE Vernova, Hitachi Energy, Siemens Energy, Eaton, Schneider Electric, Vertiv, and the SMR vendors - sit at a structurally different position in the value chain than the chip vendors and software companies that defined the previous AI infrastructure boom. Understanding the bottleneck shift is prerequisite to understanding which AI deployment programs will actually reach commercial operation on their announced timelines and which will quietly slip into the late 2020s.
Related coverage
Cross-Network: SX Bottleneck Atlas | EX Bottleneck Atlas | SX:CoWoS | SX:HBM | EX:Nuclear Energy | EX:BESS
DX Pillars: Energy | Grid-tie | Nuclear | Cooling and Thermal Management | Direct-to-Chip Cooling | Types | AI Factory | Sites | Business Models