The explosive growth of artificial intelligence is fundamentally reshaping the landscape of data centers. While a traditional rack's heat dissipation density is 5-10 kW, AI computing racks equipped with NVIDIA H100 or B200 GPUs can easily reach 40-100 kW or higher[1]. This means the HVAC system must remove roughly an order of magnitude more heat from the same data center floor area. Traditional data center HVAC design methodologies can no longer cope -- AI-era data center HVAC requires an entirely new mindset.

1. The Heat Density Revolution in AI Data Centers

The evolution of data center heat dissipation density clearly reflects the progression of computing technology:

  • 2000s: 2-5 kW per rack, primarily CPU servers, traditional raised floor underfloor air delivery was sufficient
  • 2010s: 5-15 kW per rack, virtualization and cloud computing drove density increases, hot/cold aisle containment became standard
  • 2020s: 20-100+ kW per rack, GPU/TPU accelerators driving AI training, traditional air cooling facing physical limits[2]

Take the NVIDIA DGX B200 system as an example -- a single rack can consume up to 120 kW[3]. If cooled using traditional underfloor air delivery, the required airflow would push sub-floor plenum velocities beyond reasonable ranges, and temperature gradients within the rack would be nearly impossible to control. This is precisely the inflection point where liquid cooling technology transitions from "optional" to "essential."
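
A quick sensible-heat check makes the problem concrete. The sketch below uses illustrative values only -- air at roughly 1.2 kg/m^3 and 1005 J/(kg*K), with an assumed 12 degrees C air-side temperature rise -- to estimate the airflow a 120 kW rack would demand if cooled entirely by air.

```python
# Back-of-the-envelope airflow check for a 120 kW air-cooled rack.
# Assumed values (not from the article): air density 1.2 kg/m^3,
# specific heat 1005 J/(kg*K), 12 K air-side temperature rise.

def required_airflow_m3s(heat_kw: float, delta_t_k: float,
                         rho: float = 1.2, cp: float = 1005.0) -> float:
    """Volumetric airflow (m^3/s) needed to remove heat_kw at a delta_t_k rise."""
    return heat_kw * 1000.0 / (rho * cp * delta_t_k)

flow = required_airflow_m3s(heat_kw=120.0, delta_t_k=12.0)
print(f"{flow:.1f} m^3/s  (~{flow * 2118.9:,.0f} CFM)")  # ~8.3 m^3/s, ~17,500 CFM
```

At well over ten thousand CFM for a single rack, multiplied across a full row, the implications for plenum velocity and fan energy are clear.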

2. Air Cooling Systems: Limits and Optimization

Air cooling remains the primary cooling method for most data centers, with cost and maintenance advantages in low-to-medium density scenarios. However, to address ever-increasing heat densities, air cooling system design must become more refined[4]:

Aisle Containment

Hot/cold aisle containment is the most cost-effective measure for improving air cooling efficiency. By physically separating the cold supply air stream from the hot exhaust stream, containment prevents bypass air and recirculation, improving cooling efficiency by 20-30%[5]. In practice, Hot Aisle Containment, with the hot exhaust ducted back to the precision cooling units' return, is more widely adopted than Cold Aisle Containment, because the rest of the data center floor stays at a comfortable ambient temperature.

In-Row Cooling

Installing precision cooling units directly between rack rows dramatically shortens the air delivery path and reduces ductwork losses. For medium-to-high density areas of 15-30 kW/rack, in-row cooling represents the upper limit of air cooling solutions. However, when density exceeds 30 kW/rack, even in-row cooling begins to face airflow volume and noise bottlenecks.

Raising Supply Air Temperature

The ASHRAE TC 9.9 Data Processing Environments guidelines[6] define an allowable inlet air temperature range of 15-32 degrees C for Class A1 equipment (the recommended envelope is narrower, 18-27 degrees C). Raising supply air temperature from the traditional 13-15 degrees C to 20-27 degrees C can significantly increase Free Cooling available hours and reduce chiller operating load. Google and other hyperscale operators have demonstrated that 27 degrees C supply air temperature is entirely feasible with proper humidity control[7].
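
As a rough illustration of why the setpoint matters, the sketch below counts the hours in a year during which outdoor air alone could satisfy a given supply temperature. The hourly temperatures are a toy profile standing in for a real local weather file, and the 3 degrees C approach margin is an assumption, not a design value -- the absolute numbers are meaningless, only the trend is the point.

```python
# Illustrative estimate of how raising supply air temperature increases
# free cooling hours. hourly_drybulb_c would come from a local TMY weather
# file (a synthetic toy profile is used here); the 3 K approach is an
# assumed margin for economizer / heat exchanger losses.
import random

def free_cooling_hours(hourly_drybulb_c, supply_setpoint_c, approach_k=3.0):
    """Count hours where outdoor air can directly satisfy the supply setpoint."""
    limit = supply_setpoint_c - approach_k
    return sum(1 for t in hourly_drybulb_c if t <= limit)

random.seed(0)
weather = [18 + 10 * random.uniform(-1, 1) for _ in range(8760)]  # toy profile

for setpoint in (15, 20, 27):
    print(setpoint, "degC supply ->",
          free_cooling_hours(weather, setpoint), "free cooling hours")
```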

3. Liquid Cooling Technology: The New Standard for AI Data Centers

Liquid Cooling leverages the far superior specific heat capacity and thermal conductivity of liquids compared to air, removing more heat in a smaller volume. Water's volumetric heat capacity is approximately 3,400 times that of air -- this is the fundamental reason liquid cooling technology can break through air cooling's physical limits[8].
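
The ratio is easy to sanity-check with typical property values (assumed here at roughly 25 degrees C, not taken from the cited reference):

```python
# Quick check of the water vs. air volumetric heat capacity ratio cited above.
# Property values are typical figures at about 25 degrees C (assumptions).

water_j_per_m3k = 997 * 4180   # density (kg/m^3) * specific heat (J/(kg*K))
air_j_per_m3k   = 1.18 * 1005

print(f"water : {water_j_per_m3k:,.0f} J/(m^3*K)")
print(f"air   : {air_j_per_m3k:,.0f} J/(m^3*K)")
print(f"ratio : ~{water_j_per_m3k / air_j_per_m3k:,.0f}x")  # on the order of 3,500x
```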

Direct-to-Chip / Cold Plate Cooling

Cold plates are mounted directly on GPU/CPU chip surfaces, with circulating coolant carrying away the heat. This is currently the most mainstream liquid cooling solution for AI data centers. NVIDIA's GB200 NVL72 rack adopts a direct liquid cooling design, with coolant temperatures of approximately 25-45 degrees C[9].

The advantage of direct liquid cooling is precision -- heat is removed at the point of generation without needing air as an inefficient heat transfer medium. However, it also introduces new engineering challenges: liquid leak risks inside IT equipment, planning of Coolant Distribution Units (CDUs), piping material compatibility, and construction quality control.
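
For a rough sense of the flow rates a CDU must distribute, the sketch below assumes a 120 kW rack, a 10 degrees C coolant temperature rise (for example 35 to 45 degrees C, within the range above), and water-like coolant properties; actual values depend on the coolant blend and the vendor's specifications.

```python
# Rough CDU flow sketch: coolant flow needed to carry a rack's heat load
# at a given supply/return temperature difference. The 120 kW load and the
# 10 K rise are illustrative assumptions; water-like properties are assumed.

def coolant_flow_lpm(heat_kw: float, delta_t_k: float,
                     rho: float = 990.0, cp: float = 4180.0) -> float:
    """Coolant volumetric flow in litres per minute."""
    mass_flow = heat_kw * 1000.0 / (cp * delta_t_k)   # kg/s
    return mass_flow / rho * 1000.0 * 60.0            # L/min

print(f"{coolant_flow_lpm(120.0, 10.0):.0f} L/min per 120 kW rack")  # ~175 L/min
```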

Immersion Cooling

Submerging entire server motherboards in non-conductive dielectric coolant represents the ultimate heat dissipation density solution. Single-phase immersion cooling uses dielectric fluids such as 3M Novec or Shell Immersion Fluid[10], capable of handling over 100 kW per rack.

Immersion cooling virtually eliminates the need for fans, resulting in extremely low noise and dramatically reduced energy consumption. However, adoption barriers are higher: IT equipment warranty conditions, changed maintenance procedures, coolant cost and environmental impact, and structural load considerations (dielectric fluid density is approximately 1.2-1.8 kg/L) all require careful evaluation.
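
A simple weight estimate shows why structural load deserves attention. Apart from the fluid density range quoted above, every figure below (tank volume, hardware mass, footprint) is a made-up example value; the result only indicates the order of magnitude to check against the floor's rated load.

```python
# Illustrative structural check for a single-phase immersion tank.
# Tank volume, hardware mass, and footprint are hypothetical example values;
# the 1.6 kg/L fluid density falls within the range quoted above.

fluid_volume_l = 800            # assumed tank fill volume
fluid_density_kg_per_l = 1.6
hardware_and_tank_kg = 500      # assumed servers + tank structure
footprint_m2 = 1.2 * 0.8        # assumed tank footprint

total_kg = fluid_volume_l * fluid_density_kg_per_l + hardware_and_tank_kg
load_kpa = total_kg * 9.81 / footprint_m2 / 1000.0

print(f"~{total_kg:,.0f} kg on {footprint_m2:.2f} m^2 -> {load_kpa:.1f} kPa floor load")
```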

4. PUE Optimization: From Metric to Practice

PUE (Power Usage Effectiveness) is the core metric for measuring data center energy efficiency, defined as total data center power consumption divided by IT equipment power consumption[11]:

PUE = Total Facility Power / IT Equipment Power

The ideal PUE is 1.0 (all power used for computing), but in practice, HVAC systems, UPS losses, lighting, etc. consume additional power. The global average data center PUE is approximately 1.55-1.60, while top-tier hyperscale data centers achieve below 1.10[12].
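
A worked example, using hypothetical sub-metered power figures rather than data from any specific facility:

```python
# Simple PUE calculation following the definition above.
# All power figures are hypothetical sub-meter readings.

it_power_kw = 10_000
cooling_kw = 3_500
ups_and_distribution_losses_kw = 600
lighting_and_misc_kw = 100

total_facility_kw = (it_power_kw + cooling_kw
                     + ups_and_distribution_losses_kw + lighting_and_misc_kw)
pue = total_facility_kw / it_power_kw
print(f"PUE = {pue:.2f}")   # 1.42 with these example numbers
```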

HVAC-related factors affecting PUE include:

  • Free Cooling ratio: Using outdoor air or cooling towers for direct cooling during low-temperature seasons, reducing chiller operation
  • Chiller efficiency (kW/RT): High-efficiency magnetic bearing or centrifugal chillers can achieve COP of 8-10 (the kW/RT equivalent is shown in the sketch after this list)
  • Supply air temperature strategy: Raising supply temperature increases free cooling hours
  • Airflow management quality: Reducing bypass and recirculation mixing
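
Since the list above quotes chiller efficiency both as kW/RT and as COP, the conversion between the two is worth making explicit: one refrigeration ton equals 3.517 kW of cooling, so COP = 3.517 / (kW/RT).

```python
# Conversion between the two chiller efficiency metrics used above:
# kW per refrigeration ton (kW/RT) and COP. 1 RT = 3.517 kW of cooling.

def cop_from_kw_per_rt(kw_per_rt: float) -> float:
    return 3.517 / kw_per_rt

def kw_per_rt_from_cop(cop: float) -> float:
    return 3.517 / cop

for cop in (8.0, 10.0):
    print(f"COP {cop:>4} -> {kw_per_rt_from_cop(cop):.3f} kW/RT")
# A COP of 8-10 corresponds to roughly 0.35-0.44 kW/RT.
```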

However, the high power density of AI data centers may actually work in favor of PUE optimization. Liquid cooling returns coolant at higher temperatures (30-45 degrees C), which reduces reliance on compressor-based cooling and makes Waste Heat Recovery far more practical -- the recovered heat is usable for building heating, agricultural greenhouses, or industrial processes, moving toward a true circular economy[13].

5. Local Challenges for AI Data Centers in Taiwan

As a core global production base for semiconductors and AI hardware, Taiwan is experiencing rapidly growing demand for AI computing data center construction. However, Taiwan's geographic and climatic conditions present unique design challenges:

  • High temperature and humidity: Year-round high wet-bulb temperatures limit cooling tower and free cooling efficiency
  • Power supply: Individual data centers demanding tens of megawatts challenge grid capacity and backup power systems
  • Earthquakes and typhoons: Seismic design for liquid cooling piping and typhoon protection for cooling towers require special consideration
  • Water usage restrictions: Water consumption of evaporative cooling systems may face water resource limitations in some areas

Conclusion

AI data center HVAC design is shifting from traditional "heat removal" thinking to a systems engineering approach of "thermal management." High-density cooling is no longer just about upgrading HVAC equipment -- it encompasses building design, power distribution, cooling technology selection, control strategies, and sustainable operations. As AI model parameter counts continue to double and computing demands continue to surge, data center HVAC engineers must stay at the forefront of technology, responding to this cooling revolution with innovative solutions.