Data Center Stack
The Stack pillar covers the engineering and architecture of the physical systems that make a data center work. It is the how-it-is-built pillar, distinct from Facility Operations (how it is run day-to-day) and from Compute Operations (what runs inside). The stack has a natural hierarchy that builds upward from silicon to campus, and a set of cross-cutting infrastructure disciplines (cooling, power, networking, orchestration) that touch every physical layer. The ten children below group into those two views, and each can be read either as a step up the hierarchy or as one of the disciplines that binds the hierarchy together.
Accelerator density, fabric bandwidth, and gigawatt-class power delivery have reshaped every layer of the stack over the past several years. Silicon has moved to 3nm and below with HBM stacking and chiplet packaging. Servers have become accelerator-dominant boards with liquid-cooled cold plates as the standard configuration. Racks have consolidated into 100 to 250 kW designs. Clusters have reached the tens-of-thousands-of-accelerator scale with topology-aware schedulers that did not exist a few years ago. Facilities and campuses have crossed power thresholds that restructure how they interact with the grid and the surrounding community. Each layer child below covers one slice of this reshaping, and each discipline child covers how one infrastructure system threads through all of it.
The physical stack progression
Six children organize the stack as a physical hierarchy, from the smallest functional unit (silicon) to the largest (campus). Each layer composes the layer below it into a larger operational unit with its own engineering concerns.
| Layer | Scope | Defining Engineering Concerns |
|---|---|---|
| Chips and Silicon | Accelerators, CPUs, ASICs, HBM, packaging, process nodes | Process technology, die size, HBM capacity and bandwidth, TDP, chiplet architecture |
| Server Layer | Compute boards, memory, local storage, NICs, cold plates, chassis form factor | Accelerator count per board, thermal envelope, interconnect, power delivery, serviceability |
| Rack Layer | Rack frame, top-of-rack switching, PDUs, manifolds, rear-door exchangers, cabling | Rack power, cooling modality, TOR topology, cable management, serviceability |
| Cluster Layer | Multi-rack pods, cluster fabric, shared storage, CDUs serving multiple racks | Topology-aware scheduling, interconnect bandwidth, parallel file systems, cluster redundancy |
| Facility Layer | Data halls, mechanical plants, electrical distribution, facility networking, life safety | Building capacity, facility water loops, grid interconnection, redundancy topology (N+1, 2N) |
| Campus Layer | Multi-building sites, onsite substations, district cooling, campus energy infrastructure | Campus-scale grid interconnection, multi-building resilience, shared utilities, onsite generation |
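To illustrate the composition idea behind this table, here is a minimal sketch of the rollup from server to cluster. The class names and the numbers are hypothetical placeholders, not a reference design; the point is only that each layer aggregates the units of the layer below into a larger one with its own derived properties.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative composition model of the physical stack.
# All names and figures are placeholders, not a reference design.

@dataclass
class Server:
    accelerators: int
    power_kw: float  # board-level power envelope

@dataclass
class Rack:
    servers: List[Server] = field(default_factory=list)

    @property
    def power_kw(self) -> float:
        # Rack power is the sum of its servers (switching and PDU
        # overhead ignored here for brevity).
        return sum(s.power_kw for s in self.servers)

@dataclass
class Cluster:
    racks: List[Rack] = field(default_factory=list)

    @property
    def power_kw(self) -> float:
        return sum(r.power_kw for r in self.racks)

    @property
    def accelerators(self) -> int:
        return sum(s.accelerators for r in self.racks for s in r.servers)

# A hypothetical 8-accelerator, 10 kW server in a 16-server rack,
# 64 racks per cluster:
rack = Rack(servers=[Server(accelerators=8, power_kw=10.0)] * 16)
cluster = Cluster(racks=[rack] * 64)
print(cluster.accelerators, cluster.power_kw)  # 8192 accelerators, 10240.0 kW
```

Facility and campus extend the same pattern, but each also adds external interfaces (grid, water, peering) that no amount of composition from below produces.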
The cross-cutting disciplines
Four children cover infrastructure disciplines that do not belong to any single layer but instead thread through all of them. A cold plate, a busbar, a network link, and an orchestration agent each appear at multiple layers and have to be engineered coherently across them. Treating these as cross-cutting disciplines rather than forcing them into layer-specific treatment keeps the engineering intent visible.
| Discipline | Scope | Where It Appears |
|---|---|---|
| Cooling and Thermal Management | HVAC, liquid cooling, direct-to-chip, immersion, heat rejection, water systems | Every layer from chip (cold plates) to campus (district cooling) |
| Power Distribution | Grid-tie, switchgear, UPS, PDUs, busbars, VRMs, DC distribution | Every layer from silicon (on-die power delivery) to campus (substations) |
| Networking and Fabrics | Server NICs, TOR switches, cluster fabric, facility and campus networking, external peering | Every layer from server NIC to campus peering edge |
| Orchestration and Digital Twin | Workload orchestration, facility digital twins, simulation models, control systems | Spans compute layers (orchestration) and facility layers (digital twin) |
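To make the cross-cutting point concrete, here is a minimal sketch of a cooling-chain closure check. Every capacity and ratio below is a hypothetical placeholder; a real check would be driven by component datasheets and the facility's mechanical design. The structure is what matters: each layer must be able to absorb the heat the layer below produces.

```python
# Hedged sketch of a cross-layer cooling closure check.
# All capacities and counts are hypothetical placeholders.

def cooling_chain_closes(chip_heat_w: float,
                         cold_plate_capacity_w: float,
                         chips_per_server: int,
                         servers_per_rack: int,
                         racks_per_cdu: int,
                         cdu_capacity_kw: float,
                         cdus_in_facility: int,
                         facility_heat_rejection_mw: float) -> bool:
    """True only if every layer can absorb the heat produced by the layer below."""
    rack_heat_kw = chip_heat_w * chips_per_server * servers_per_rack / 1000.0
    cdu_heat_kw = rack_heat_kw * racks_per_cdu
    facility_heat_mw = cdu_heat_kw * cdus_in_facility / 1000.0
    return (chip_heat_w <= cold_plate_capacity_w                  # chip -> cold plate
            and cdu_heat_kw <= cdu_capacity_kw                    # racks -> CDU
            and facility_heat_mw <= facility_heat_rejection_mw)   # CDUs -> facility loop

# The cold plate handles the chip, but the CDU cannot absorb the racks it
# serves, so the chain does not close and the check returns False:
print(cooling_chain_closes(chip_heat_w=1000, cold_plate_capacity_w=1200,
                           chips_per_server=8, servers_per_rack=16,
                           racks_per_cdu=2, cdu_capacity_kw=200,
                           cdus_in_facility=100, facility_heat_rejection_mw=50))
```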
How the two views relate
The physical progression answers "what is the next unit up?" A server is a composition of chips; a rack is a composition of servers; a cluster is a composition of racks; and so on up to campus. Each layer introduces engineering concerns that did not exist at the layer below: the server introduces board- and system-level thermal design that does not exist at the chip level; the rack introduces power distribution that does not exist at the server level; the cluster introduces topology-aware scheduling that does not exist at the rack level; and the facility and campus introduce external interfaces (grid, water, network peering, community relations) that do not exist inside any individual compute unit.
The cross-cutting disciplines answer "how does this infrastructure system work across the hierarchy?" A cooling system has elements at every layer, and the engineering only closes if they are designed together: a cold plate that works in isolation but exceeds facility CDU capacity is a broken system. The same is true for power (server PSU choices depend on rack PDU choices, which depend on facility switchgear choices, which depend on the grid interconnection), networking (server NIC choices depend on TOR topology, which depends on the cluster fabric, which depends on facility and campus peering), and orchestration (workload scheduling cannot be separated from facility thermal and power monitoring).
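A minimal sketch of that last coupling, assuming an invented facility-telemetry interface (no real orchestrator or DCIM API is implied): a job is admitted only when the facility has both power and thermal headroom for it.

```python
# Hedged sketch: workload placement gated on facility power and thermal
# headroom. The telemetry and job types are invented for illustration;
# they do not correspond to any particular orchestrator or DCIM product.

from dataclasses import dataclass

@dataclass
class FacilityTelemetry:
    power_headroom_kw: float    # unused capacity behind the switchgear
    thermal_headroom_kw: float  # remaining heat-rejection capacity

@dataclass
class JobRequest:
    power_draw_kw: float        # expected draw of the accelerators it will occupy

def admit(job: JobRequest, telemetry: FacilityTelemetry, margin: float = 0.9) -> bool:
    """Admit a job only if the facility can both power and cool it with margin."""
    return (job.power_draw_kw <= margin * telemetry.power_headroom_kw
            and job.power_draw_kw <= margin * telemetry.thermal_headroom_kw)

print(admit(JobRequest(power_draw_kw=800),
            FacilityTelemetry(power_headroom_kw=1200, thermal_headroom_kw=700)))
# False: the power side has room, but the thermal side does not.
```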
Reading the stack as both a hierarchy of physical layers and a set of cross-cutting disciplines is how working engineers think about data center architecture. Both views are necessary; neither is sufficient alone.
Where Stack sits in the DatacentersX structure
The Stack pillar covers engineering and architecture. The Facility Operations pillar covers operations and monitoring of the same physical systems at runtime. The Compute Operations pillar covers the operations of workloads running on top of the stack. The Types pillar covers the kinds of facilities that instantiate the stack in different configurations. The Energy pillar covers the power-generation and sustainability side of the infrastructure that Stack:Power Distribution delivers. Each pillar answers a different question about the same physical plant, and the stack child pages cross-reference outward to each of them.
Related coverage
Chips and Silicon | Server Layer | Rack Layer | Cluster Layer | Facility Layer | Campus Layer | Cooling and Thermal Management | Power Distribution | Networking and Fabrics | Orchestration and Digital Twin | Types | Facility Operations