Chips & Silicon in Data Centers


Data centers are powered by silicon at every level: CPUs, GPUs, custom AI accelerators, networking ASICs, and DPUs. This page surveys the leading-edge chips driving today’s AI factories, hyperscale clouds, and sovereign data centers.


Hyperscaler In-House Silicon

| Company | Chip(s) | Type / Role | Foundry |
| --- | --- | --- | --- |
| Amazon / AWS | Trainium2 • Inferentia2 • Graviton4 | Training • inference • Arm CPU | TSMC 5nm |
| Google | TPU v5p • Axion | Training ASIC • cloud Arm CPU | TSMC 4nm / 3nm |
| Microsoft | Maia 100 • Cobalt 100 | AI accelerator • Arm CPU | TSMC 5nm |
| Meta | MTIA v2 | AI training & inference accelerator | TSMC 5nm |
| Apple | M4 family • A18 Pro | Edge inference (PC, mobile) | TSMC 3nm (N3/N3E) |
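
One practical consequence of this diversity is that model code increasingly has to discover which silicon it is running on. Below is a minimal Python sketch of backend probing, assuming PyTorch plus optional vendor plug-ins (torch_xla for TPUs, torch_neuronx for Trainium/Inferentia); the label strings are illustrative, not vendor terminology:

```python
# Probe which accelerator backend is available in the current environment.
# Assumes PyTorch; vendor plug-ins (torch_xla, torch_neuronx) are optional.
import importlib.util

import torch


def detect_backend() -> str:
    """Return a best-guess label for the local AI silicon."""
    if torch.cuda.is_available():
        # Covers NVIDIA CUDA builds; AMD ROCm builds also answer here.
        return f"gpu: {torch.cuda.get_device_name(0)}"
    if importlib.util.find_spec("torch_xla"):
        # PyTorch/XLA is the usual path to Google TPUs (e.g., TPU v5p).
        return "xla: TPU-class device"
    if importlib.util.find_spec("torch_neuronx"):
        # The AWS Neuron SDK targets Trainium/Inferentia.
        return "neuron: Trainium/Inferentia"
    return "cpu: no accelerator plug-in found"


if __name__ == "__main__":
    print(detect_backend())
```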

GPU & Accelerator Vendors

| Company | Chip(s) | Type / Role | Foundry |
| --- | --- | --- | --- |
| NVIDIA | H100 (Hopper) • B100/B200 (Blackwell) • GB200 NVL72 | Training GPUs • AI superchips | TSMC 4N (Hopper) • TSMC 4NP (Blackwell) |
| AMD | MI300X • MI325X | AI accelerators | TSMC 5nm + 6nm chiplets, CoWoS packaging |
| Intel | Gaudi 3 • Xeon 6 (Granite Rapids, Sierra Forest) | AI accelerator • x86 CPUs | TSMC 5nm (Gaudi 3) • Intel 3 (Xeon 6) |
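
The headline numbers behind these parts are easier to compare with a little roofline arithmetic. A back-of-envelope sketch using commonly cited public specs (dense BF16, no sparsity); treat the constants as approximate rather than vendor-verified:

```python
# Back-of-envelope roofline: how many FLOPs per byte of HBM traffic a kernel
# needs before it becomes compute-bound rather than memory-bound.
# Constants are commonly cited public figures; treat them as approximate.
SPECS = {
    #                  (peak dense BF16 TFLOPS, HBM bandwidth TB/s)
    "NVIDIA H100 SXM": (989, 3.35),
    "AMD MI300X":      (1307, 5.3),
}

for name, (tflops, tbps) in SPECS.items():
    ridge = tflops / tbps  # FLOPs per byte at the roofline "ridge point"
    print(f"{name}: ~{ridge:.0f} FLOPs/byte to saturate compute")
```

Kernels below the ridge point (common in bandwidth-heavy inference, e.g. KV-cache reads) are memory-bound, which is one reason MI300X's larger, faster HBM is often pitched at inference workloads.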

Alternative Architectures

| Company | Chip(s) | Role | Foundry |
| --- | --- | --- | --- |
| Cerebras | Wafer Scale Engine 3 (WSE-3) | Training accelerator (largest single chip) | TSMC 5nm (WSE-2 was 7nm) |
| Graphcore | Bow IPU | Inference / training IPU | TSMC 7nm (wafer-on-wafer stacking) |
| Tenstorrent | Grayskull • Wormhole | RISC-V CPUs + AI accelerators | Samsung + TSMC |
| Ampere Computing | AmpereOne | Cloud-native Arm CPU | TSMC 5nm |
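
To make the "largest single chip" claim concrete, here is a quick scale comparison of commonly cited WSE-3 marketing figures against a single HBM-based GPU; the numbers are approximate and drawn from public materials, not independently verified:

```python
# Rough scale comparison: wafer-scale on-chip SRAM vs. a single HBM GPU.
# Figures are commonly cited public numbers; treat them as approximate.
wse3_sram_gb = 44      # on-wafer SRAM capacity, WSE-3
wse3_bw_pbps = 21      # aggregate on-wafer memory bandwidth, PB/s
h100_hbm_gb = 80       # HBM3 capacity, H100 SXM
h100_bw_tbps = 3.35    # HBM3 bandwidth, H100 SXM

print(f"bandwidth ratio: ~{wse3_bw_pbps * 1000 / h100_bw_tbps:,.0f}x")
print(f"capacity ratio:  ~{wse3_sram_gb / h100_hbm_gb:.2f}x (SRAM vs HBM)")
```

The asymmetry is the point: enormous on-wafer bandwidth but less raw capacity than a single GPU's HBM stack, which is why wafer-scale systems pair the wafer with external memory tiers.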

Networking & DPUs

| Company | Chip(s) | Role | Foundry |
| --- | --- | --- | --- |
| Broadcom | Tomahawk 5 • Jericho3-AI | Networking ASICs (switch fabrics) | TSMC 5nm |
| Marvell | Teralynx • Octeon | Networking ASICs + DPUs | TSMC 5nm/7nm |
| NVIDIA | BlueField DPUs • NVLink/NVSwitch | Networking + composable infrastructure | TSMC 7nm/5nm |
| Fungible (acquired by Microsoft, 2023) | SmartNIC / DPUs | Data processing + storage offload | TSMC |
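
Switch radix is what turns these ASICs into cluster ceilings. A sketch of two-tier Clos (leaf-spine) sizing, assuming Tomahawk 5's commonly cited 51.2 Tb/s of capacity exposed as 128 x 400 GbE ports; the non-blocking split is a design convention, not a vendor requirement:

```python
# Two-tier (leaf-spine) Clos sizing from switch radix.
# Assumes a 51.2 Tb/s ASIC (e.g., Tomahawk 5) exposed as 128 x 400 GbE ports
# and a non-blocking design (half the leaf ports up, half down).
ports_per_switch = 51_200 // 400   # 128 ports of 400 GbE

down_per_leaf = ports_per_switch // 2    # 64 ports toward GPUs/NICs
max_leaves = ports_per_switch            # each spine reaches every leaf
endpoints = down_per_leaf * max_leaves   # non-blocking two-tier maximum

print(f"radix: {ports_per_switch} x 400GbE")
print(f"max non-blocking endpoints (2 tiers): {endpoints}")  # 8192
```

Doubling ASIC capacity roughly quadruples the endpoints a flat two-tier fabric can host, which is why each Tomahawk generation reshapes AI cluster topologies.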

China & Sovereign Silicon

| Company | Chip(s) | Role | Foundry |
| --- | --- | --- | --- |
| Huawei | Ascend 910B / 920 • Kirin 9000S | AI training/inference • smartphone SoC | SMIC 7nm-class (DUV multipatterning) |
| Alibaba (T-Head) | Yitian 710 • Hanguang 800 | Arm server CPUs • AI inference accelerator | TSMC (pre-sanctions) • reportedly SMIC now |
| Baidu | Kunlun 2 | AI inference accelerators | Samsung (14nm gen 1, 7nm gen 2) |
| Biren | BR100 / BR104 | Training/inference GPGPUs | TSMC (pre-restrictions); now node-limited domestically |
| Cambricon | Siyuan (MLU) series | Training/inference ASICs | SMIC (limited node capability) |

China’s Shift: Ban on Nvidia & Domestic Push

In September 2025, China reportedly ordered domestic companies to stop buying Nvidia chips and to cancel existing orders, a landmark policy that forces migration toward Huawei's Ascend/Atlas line.

  • Immediate effect: Chinese hyperscalers and AI labs will phase out Nvidia H100/B200/GB200 GPUs in favor of Huawei silicon.
  • Huawei roadmap: Ascend 950/960/970 and Atlas “supernode” clusters are positioned as drop-in substitutes for frontier training and inference workloads.
  • Deployment impact: China’s future AI factories may evolve on parallel architectures, with unique interconnects, software stacks, and cluster topologies.
  • Risks: Gaps in HBM supply, advanced packaging, and CUDA-class software ecosystems could constrain performance relative to Nvidia-based systems elsewhere; the sketch after this list shows the framework-level side of that migration.
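
Below is a minimal sketch of what "migrating off CUDA" looks like at the framework level, assuming Huawei's Ascend PyTorch adapter (the torch_npu package) is installed; the fallback ordering is illustrative, not a vendor recommendation:

```python
# Minimal device-portability shim for code moving between NVIDIA GPUs
# and Huawei Ascend NPUs. Assumes the Ascend PyTorch adapter (torch_npu),
# which registers the "npu" device type with PyTorch.
import torch

try:
    import torch_npu  # noqa: F401  (side effect: enables torch.npu)
    HAS_NPU = True
except ImportError:
    HAS_NPU = False


def pick_device() -> torch.device:
    if torch.cuda.is_available():              # NVIDIA (or AMD ROCm) build
        return torch.device("cuda")
    if HAS_NPU and torch.npu.is_available():   # Ascend 910B-class parts
        return torch.device("npu")
    return torch.device("cpu")


x = torch.randn(4, 4, device=pick_device())
print(x.device)
```

The device string is the easy part; the harder gaps named above (custom kernels, collective libraries, profiling tools) sit below this layer.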

Future / Next-Gen Chips (Watchlist)

Roadmaps shift fast; treat the entries below as expected/rumored until vendor release notes and foundry disclosures confirm specifics.

| Company | Candidate | Segment | Node / Packaging (expected) | Status / ETA | Notes |
| --- | --- | --- | --- | --- | --- |
| NVIDIA | "Rubin" (post-Blackwell) | AI training/inference GPU | TSMC N3 / advanced CoWoS | 2026 (public roadmap) | Successor to Blackwell (B200/GB200); likely higher memory bandwidth and an NVLink evolution. |
| AMD | Instinct MI400 series | AI accelerators | TSMC N3/N4 + 2.5D/3D packaging | 2026 (expected) | Follows MI325X; focus on perf/W, memory capacity, and ecosystem maturity. |
| Intel | Gaudi "next" / Xeon roadmap | AI accel + x86 CPUs | Intel 3 / Intel 18A (Foveros/EMIB) | 2025–2026 (rolling) | Gaudi line continues; Xeon 6 families iterate on AI offload, DDR5, and CXL scale. |
| AWS | Trainium3 / Inferentia3 | Training & inference ASICs | TSMC N4/N3 (likely) | 2026 (speculative) | Follows Trainium2/Inferentia2; focus on cluster-scale throughput. |
| Google | TPU v6 / v6p | Training ASIC | TSMC N3 + HBM4 era | 2026 (speculative) | Scale-out mesh, memory bandwidth, and perf/W uplift for frontier model training. |
| Microsoft | Maia 200 • Cobalt next | AI accel • Arm CPU | TSMC N4/N3 + CoWoS | 2025–2026 (expected) | Second-gen Azure silicon; tighter fabric and memory advances. |
| Meta | MTIA v3 | AI accelerator | TSMC N4/N3 | 2026 (rumored) | Focus on inference density and Meta-specific operators. |
| Apple | M5 / A19 family | Edge inference SoCs | TSMC N3E → N2 (later) | 2025–2026 | On-device inference uplift; ties into hybrid cloud/edge strategies. |
| Huawei | Ascend 950/960 (roadmap) | AI accelerators | SMIC 7nm-class (DUV) • advanced packaging | 2025–2026 (domestic) | Sanctions-bounded cadence; performance rising within node limits. |
| Cerebras | WSE-Next | Wafer-scale training | TSMC N5 → N3 (possible) | 2026+ (speculative) | Larger on-wafer SRAM and yield tricks for longer-context models. |
| Broadcom | Tomahawk 6 • Jericho3-AI (evol.) | Switching ASICs | TSMC N3/N4 | 2025–2026 | AI fabric scale to 100k+ GPU pods; optics-friendly roadmaps. |
| Marvell | Teralynx next • Octeon next | Switch + DPU | TSMC N5/N3 | 2025–2026 | AI spine/leaf throughput; DPUs for storage/IO offload. |
| NVIDIA | BlueField-4 • NVLink-next | DPU + interconnect | TSMC N5/N4 | 2025–2026 | Higher line-rate security/IO offload; tighter GPU coherency fabrics. |
| Ampere | AmpereOne next | Cloud Arm CPU | TSMC N3/N4 | 2026 | Perf/W gains for stateless cloud workloads; CXL features. |
| Tenstorrent | Next-gen RISC-V + AI | CPU/accelerator | Samsung/TSMC (TBD) | 2026+ | Modular tiles; licensing model for third parties. |

Key Takeaways

  • NVIDIA remains the dominant training silicon provider (Blackwell generation).
  • AMD and Intel are the main challengers, competing on cost, open software stacks, and x86 incumbency.
  • Hyperscalers (AWS, Google, Microsoft, Meta) are actively reducing reliance on NVIDIA with custom chips.
  • China’s ecosystem is forced onto SMIC (7nm-class), limiting competitiveness but sustaining sovereignty.
  • Networking & DPUs (Broadcom, Marvell, NVIDIA) are as critical as compute: they determine how far AI clusters scale and how well GPUs stay utilized.
  • Alternative architectures (Cerebras, Graphcore, Tenstorrent) push boundaries but face adoption hurdles.