HVAC & Air Handling
HVAC and air handling is the cooling modality that uses air as the heat-transport fluid between servers and the mechanical plant. It is the legacy default of the data center industry and remains the correct engineering choice for every load below the density threshold where liquid becomes economical. In modern AI facilities, air handling continues to cool the majority of rack floor space that runs networking gear, storage, management servers, and general-purpose compute, even as the high-density accelerator rows transition to direct-to-chip and immersion.
The engineering problem of air cooling is set by physics. Air has a specific heat of roughly 1.005 kilojoules per kilogram per kelvin, and because it is also far less dense than water, its volumetric heat capacity is roughly 3,500 times smaller. Removing a given thermal load therefore requires moving a large mass of air, which means large volumetric flow, which means large fan power and large floor and ceiling volume given over to air paths. These constants set a hard economic ceiling around 30 kilowatts per rack, beyond which air cooling loses to liquid on total cost of ownership.
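To make the flow requirement concrete, the sketch below applies the relation Q = m·cp·ΔT with the nominal air properties quoted above (1.005 kJ/kg·K, about 1.2 kg/m³) to estimate the airflow a rack of a given load needs at a chosen supply-to-return temperature rise. The 12-kelvin delta-T, the function name, and the example loads are illustrative assumptions, not values from this article.

```python
# Minimal sketch: airflow needed to remove a rack's heat load with air,
# using Q = m_dot * c_p * delta_T. Property values are nominal assumptions
# for air at typical inlet conditions, not measured facility data.

C_P_AIR = 1.005    # kJ/(kg*K), specific heat of air
RHO_AIR = 1.2      # kg/m^3, approximate density at typical inlet conditions

def airflow_for_load(load_kw: float, delta_t_k: float) -> float:
    """Return the volumetric airflow in m^3/s needed to carry load_kw
    at a supply-to-return temperature rise of delta_t_k."""
    mass_flow = load_kw / (C_P_AIR * delta_t_k)   # kg/s, since 1 kW = 1 kJ/s
    return mass_flow / RHO_AIR                     # m^3/s

if __name__ == "__main__":
    for rack_kw in (10, 30, 50):
        m3_s = airflow_for_load(rack_kw, delta_t_k=12.0)
        cfm = m3_s * 2118.88  # 1 m^3/s is roughly 2,118.88 CFM
        print(f"{rack_kw:>3} kW rack at 12 K delta-T: {m3_s:5.2f} m^3/s (~{cfm:,.0f} CFM)")
```

A 30-kilowatt rack at a 12-kelvin rise works out to roughly 2 cubic meters per second of air, which is why density growth translates so directly into fan power and air-path volume.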
Equipment classes
| Equipment | Cooling Source | Placement | Typical Use |
|---|---|---|---|
| CRAC (Computer Room Air Conditioner) | Direct expansion refrigerant with internal compressor | Perimeter of data hall, room level | Smaller enterprise halls, edge sites, legacy facilities |
| CRAH (Computer Room Air Handler) | Chilled water from central plant | Perimeter of data hall, room level | Hyperscale and larger colocation halls |
| AHU (Air Handling Unit) | Chilled water or economizer air | Rooftop, penthouse, or mechanical room | Large hyperscale halls with centralized air supply |
| In-row cooler | Chilled water or DX | Within the row, between racks | Higher-density rows, 15 to 30 kW |
| Rear-door heat exchanger | Chilled or facility water | Back of individual rack | Retrofit density boost; bridges air-cooled and liquid infrastructure |
| Overhead cooler | Chilled water | Above the rack, typically over cold aisle | Halls without raised floors, modular deployments |
The primary axis distinguishing CRAC from CRAH is the cooling source: CRACs contain their own compressors and reject heat through a condenser loop, while CRAHs are coils fed from a central chilled-water plant. CRAHs dominate at hyperscale because centralizing chillers is more efficient than running many small compressors; CRACs persist in smaller halls and edge deployments where a central plant is not justified.
Airflow architectures
Equipment placement is only half the engineering problem. The other half is the architecture by which cold supply air reaches server intakes and hot return air reaches the cooling equipment without mixing. Four patterns dominate.
Raised floor with plenum supply. The historical default: a raised floor creates a pressurized plenum for cold air, which enters the hall through perforated tiles in the cold aisle. Hot exhaust returns overhead to perimeter CRACs or CRAHs. Effective at moderate densities; struggles above 15 kilowatts per rack without containment.
Overhead ducted supply. Cold air is delivered through overhead ducts directly into the cold aisle, eliminating the raised floor. Common in modular and prefabricated halls and in retrofits of buildings without structural capacity for raised floors.
Hot aisle / cold aisle containment. Physical barriers (doors, curtains, roof panels) segregate hot and cold air streams at the aisle level. Hot aisle containment isolates the return path; cold aisle containment isolates the supply path. Either approach prevents recirculation and allows supply temperatures to rise toward ASHRAE A1 upper limits without risking server inlet hotspots. Containment is effectively mandatory at densities above 10 kilowatts per rack and is standard in every new hyperscale build.
Chimney and return plenum. Rack-mounted chimneys duct exhaust directly into a ceiling return plenum, combining the benefits of containment with a cleaner return path. Common in high-density enterprise and HPC halls.
ASHRAE thermal classes
ASHRAE Technical Committee 9.9 publishes the industry's reference thermal guidelines for data center equipment. The classes define recommended and allowable ranges for server inlet temperature and humidity, and they are the coordination point between IT equipment manufacturers and mechanical engineers.
| ASHRAE Class | Recommended Inlet Range | Allowable Inlet Range | Typical Use |
|---|---|---|---|
| A1 | 18 to 27 degrees C | 15 to 32 degrees C | Enterprise and high-reliability halls |
| A2 | 18 to 27 degrees C | 10 to 35 degrees C | General-purpose IT halls |
| A3 | 18 to 27 degrees C | 5 to 40 degrees C | Hyperscale with wide economization window |
| A4 | 18 to 27 degrees C | 5 to 45 degrees C | Extreme economization; limited hardware support |
| H1 | 18 to 22 degrees C | 15 to 25 degrees C | High-density air-cooled halls approaching liquid threshold |
The economic significance of the classes is that every degree of allowable inlet temperature above the recommended range extends the hours per year a facility can run on outside-air economization rather than mechanical chilling, which collapses PUE. The industry trend has been toward operating closer to the allowable upper bound, supported by IT equipment certified to A3 or A4.
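As a minimal illustration of how the classes are used operationally, the sketch below encodes the ranges from the table above and classifies a measured inlet temperature as recommended, allowable, or out of range. The function and data-structure names are illustrative assumptions rather than an established API.

```python
# Minimal sketch: check a measured server inlet temperature against the
# ASHRAE class ranges listed in the table above (degrees C).

ASHRAE_CLASSES = {
    "A1": {"recommended": (18, 27), "allowable": (15, 32)},
    "A2": {"recommended": (18, 27), "allowable": (10, 35)},
    "A3": {"recommended": (18, 27), "allowable": (5, 40)},
    "A4": {"recommended": (18, 27), "allowable": (5, 45)},
    "H1": {"recommended": (18, 22), "allowable": (15, 25)},
}

def inlet_status(ashrae_class: str, inlet_c: float) -> str:
    """Classify an inlet temperature as recommended, allowable, or out of range."""
    bands = ASHRAE_CLASSES[ashrae_class]
    lo, hi = bands["recommended"]
    if lo <= inlet_c <= hi:
        return "within recommended range"
    lo, hi = bands["allowable"]
    if lo <= inlet_c <= hi:
        return "within allowable range (economization headroom)"
    return "out of range"

if __name__ == "__main__":
    print(inlet_status("A3", 31.0))  # above recommended, still allowable
    print(inlet_status("H1", 24.0))
```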
Economization
Economization uses outside ambient conditions to reduce or eliminate mechanical refrigeration hours. Two modes dominate.
Airside economization draws outside air directly into the data hall (or mixes it with return air) through filtration and humidity control. Effective in dry cool climates; constrained by local air quality and by humidity limits on IT equipment. When particulate or corrosive gas levels exceed hardware tolerances, airside economization is not viable and operators revert to closed-loop systems.
Waterside economization uses the outside air to cool the chilled-water loop through a cooling tower or dry cooler, bypassing the chiller compressors whenever ambient wet-bulb or dry-bulb temperature is low enough. The data hall remains a closed airflow system; only the heat rejection side interacts with outside air. This avoids air quality constraints at the cost of a higher temperature approach.
Economization hours per year are a primary driver of site selection for hyperscale builds. Nordic and Pacific Northwest sites achieve near-year-round economization; desert and tropical sites require far more mechanical chilling hours.
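A back-of-the-envelope way to compare sites is to count, from hourly weather data, how many hours fall below the temperatures at which each economization mode can carry the load. The sketch below does this for illustrative thresholds (24 degrees C dry-bulb for airside, 18 degrees C wet-bulb for waterside); the thresholds, function name, and toy data are assumptions, not figures from this article.

```python
# Minimal sketch: estimate annual economization hours from hourly ambient
# readings. Thresholds and the data shape are illustrative assumptions.

from typing import Iterable, Tuple

def economizer_hours(
    hourly_temps: Iterable[Tuple[float, float]],  # (dry_bulb_c, wet_bulb_c) per hour
    airside_max_dry_bulb_c: float = 24.0,
    waterside_max_wet_bulb_c: float = 18.0,
) -> Tuple[int, int]:
    """Count hours eligible for airside and waterside economization."""
    airside = waterside = 0
    for dry_bulb, wet_bulb in hourly_temps:
        if dry_bulb <= airside_max_dry_bulb_c:
            airside += 1
        if wet_bulb <= waterside_max_wet_bulb_c:
            waterside += 1
    return airside, waterside

if __name__ == "__main__":
    # Toy year: alternate one cool hour and one warm hour (8,760 hours total).
    toy_year = [(12.0, 9.0), (28.0, 21.0)] * 4380
    air_hours, water_hours = economizer_hours(toy_year)
    print(f"airside: {air_hours} h/yr, waterside: {water_hours} h/yr")
```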
The density ceiling
Air cooling has a hard upper bound set by physics and a softer upper bound set by economics. The physics bound comes from the volumetric flow required to remove a given thermal load at a given delta-T: doubling rack density doubles the required flow, and fan power rises roughly with the cube of airflow through a fixed fan and duct path, so pushing that extra air through the same equipment costs several times the fan power. The economics bound comes from the floor area given over to air-handling equipment and the containment infrastructure required to manage ever-higher delta-Ts without recirculation.
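The flow-to-power relationship follows from the fan affinity laws: for a fixed fan and duct path, power scales roughly with the cube of airflow. The short sketch below makes the scaling explicit; the 5-kilowatt baseline is an illustrative assumption.

```python
# Minimal sketch of the fan affinity relationship: for a fixed fan and duct
# system, power scales roughly with the cube of airflow. Baseline is assumed.

def fan_power_kw(flow_ratio: float, baseline_power_kw: float = 5.0) -> float:
    """Fan power at a given flow relative to baseline, per the affinity laws."""
    return baseline_power_kw * flow_ratio ** 3

if __name__ == "__main__":
    for ratio in (1.0, 1.5, 2.0):
        print(f"{ratio:.1f}x flow -> {fan_power_kw(ratio):.1f} kW fan power")
    # Doubling airflow through the same fan costs roughly 8x the power.
```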
In practice, well-contained air-cooled halls can reach 25 to 30 kilowatts per rack with acceptable fan power and supply temperature margins. Beyond that threshold, rear-door heat exchangers extend the envelope to roughly 40 to 50 kilowatts by transferring heat from exhaust air directly into a facility water loop. Above 50 kilowatts, air-only solutions are no longer economic regardless of architecture, and the rack must move to direct-to-chip or immersion.
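Read as a decision rule, those thresholds map a design rack density to a cooling approach, as in the sketch below; the boundary values simply restate the rough figures in this section and are not hard limits.

```python
# Minimal sketch: map a design rack density to the cooling approach suggested
# by the approximate thresholds above (illustrative, not prescriptive).

def cooling_approach(rack_kw: float) -> str:
    if rack_kw <= 30:
        return "air with hot/cold aisle containment"
    if rack_kw <= 50:
        return "rear-door heat exchanger on facility water"
    return "direct-to-chip or immersion liquid cooling"

if __name__ == "__main__":
    for kw in (12, 35, 80):
        print(f"{kw} kW/rack -> {cooling_approach(kw)}")
```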
This density ceiling is the structural reason air handling does not disappear even in AI facilities: the accelerator rows move to liquid, but every other rack in the building (networking, storage, management, general compute) remains well below the ceiling and stays on air.
Related coverage
Cooling and Thermal Management | Liquid Cooling | Direct-to-Chip Cooling | Cooling Tower and Heat Rejection | Facility Layer | Cooling Monitoring | Resource Usage (PUE / WUE)