As AI computing demand surges, per-rack power density has climbed from the traditional 5-10 kW to 40-100 kW or higher, and data center cooling system design faces unprecedented challenges. In this race for high-density heat dissipation, the Thermal Guidelines for Data Processing Environments published by ASHRAE Technical Committee 9.9 (Mission Critical Facilities, Data Centers, Technology Spaces and Electronic Equipment) has become the authoritative design reference for global data center environmental control[1]. This article interprets the core content of the ASHRAE TC 9.9 Thermal Guidelines from an HVAC engineering perspective, helping engineers master the temperature and humidity parameters and cooling strategies for data center environmental design.

1. What is ASHRAE TC 9.9?

ASHRAE TC 9.9 is a technical committee under the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), with the full name "Mission Critical Facilities, Data Centers, Technology Spaces and Electronic Equipment," dedicated to researching and developing environmental control standards for data centers and mission-critical facilities. Committee members include IT equipment manufacturers (such as Dell, HP, IBM, Intel), cooling system suppliers, and data center operators, and their published technical white papers reflect industry consensus[2].

TC 9.9's most significant publication is the Thermal Guidelines for Data Processing Environments, which has undergone multiple revisions since its first edition in 2004, with the latest being the fifth edition (2021). This guideline is not a mandatory regulation but rather defines recommended temperature and humidity operating ranges for different classes of data center environments based on IT equipment manufacturers' empirical test data, providing clear engineering criteria for HVAC system design[1].

2. Environmental Class Classification: A1 to A4

ASHRAE TC 9.9 classifies data center environments into four main classes (A1-A4) and two relaxed classes (B, C) based on IT equipment temperature and humidity tolerance[3]:

Class Recommended Temp. Range Allowable Temp. Range Recommended Humidity Max Dew Point (Allowable) Typical Application
A1 18-27°C 15-32°C -9°C DP to 15°C DP, ≤60% RH 17°C Enterprise servers, storage
A2 18-27°C 10-35°C -9°C DP to 15°C DP, ≤60% RH 21°C High-end commercial servers
A3 18-27°C 5-40°C -9°C DP to 15°C DP, ≤60% RH 24°C Volume deployment equipment
A4 18-27°C 5-45°C -9°C DP to 15°C DP, ≤60% RH 24°C Custom-designed servers

Note that the recommended envelope is identical for all four A classes; only the allowable envelope widens with class, extending at the low-humidity end to a combined condition of -12°C DP and 8% RH.

The "Recommended" range for each class represents optimal environmental conditions for long-term equipment operation, while the "Allowable" range represents extended conditions under which equipment can still operate normally but may affect reliability or service life. HVAC system design should target the recommended range, with the allowable range serving as a short-term tolerance boundary during abnormal conditions (such as during primary chiller failover)[3].

The Evolutionary Significance from A1 to A4

The core trend across TC 9.9 revisions has been the progressive expansion of allowable temperature upper limits. The first edition (2004) set the recommended temperature ceiling for its primary equipment class at 25°C; by the fifth edition (2021), the A4 class allowable temperature ceiling reaches 45°C. This expansion reflects two important industry trends: first, IT equipment manufacturers continue to improve hardware thermal tolerance; second, data center operators pursue "free cooling" energy-saving strategies -- raising the inlet temperature ceiling means more outdoor air conditions allow direct use of outdoor air or evaporative cooling, reducing compressor run time[4].

3. Temperature Control: Server Inlet Temperature is Key

A core concept of the TC 9.9 Thermal Guidelines is that the environmental control measurement point should be the "IT equipment inlet" (Server Inlet), not the traditional "room average temperature" or "return air temperature." This means HVAC system design must ensure cold airflow effectively reaches the intake face of every server, not merely maintaining a set temperature at a single point in the data hall[5].

This seemingly simple definition has profound implications for HVAC design:

  • Hot/Cold Aisle Containment: Traditional data halls often suffer from hot and cold air mixing (Bypass Airflow and Recirculation) causing localized hot spots. Cold Aisle Containment (CAC) or Hot Aisle Containment (HAC) is the primary means of ensuring uniform inlet temperatures
  • Airflow Management: Floor tile locations and airflow distribution in raised-floor systems, as well as supply air direction of in-row cooling units, must all be optimized with "inlet temperature" as the control target
  • CFD Simulation: High-density data halls should use Computational Fluid Dynamics (CFD) simulation to verify airflow distribution and identify potential hot spot areas
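Inlet-temperature control also dictates how much air each rack must receive. From the sensible heat balance Q = ρ · cp · V̇ · ΔT, the required volumetric airflow is V̇ = P / (ρ · cp · ΔT). A short sketch, using standard air properties (the function name and default values are illustrative assumptions):

```python
def required_airflow_m3s(rack_kw: float, delta_t_k: float,
                         rho: float = 1.2, cp: float = 1005.0) -> float:
    """Volumetric airflow (m³/s) needed to remove rack_kw of heat at a
    given air temperature rise delta_t_k, with air density rho (kg/m³)
    and specific heat cp (J/kg·K)."""
    return rack_kw * 1000.0 / (rho * cp * delta_t_k)

# A 40 kW rack with a 12 K server ΔT:
flow = required_airflow_m3s(40.0, 12.0)
print(round(flow, 2))  # ≈ 2.76 m³/s (roughly 5,900 CFM)
```

The inverse relationship with ΔT is why containment matters: recirculation lowers the effective ΔT across the cooling units, forcing fans to move far more air for the same heat load.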

4. Humidity and Dew Point Control

A major change in the fourth edition (2015) of TC 9.9 was the shift of the primary humidity control metric from "relative humidity" to "dew point temperature"[6]. The recommended humidity lower limit, originally 40% RH, was lowered to -9°C DP, while the allowable envelope extends further down to a combined condition of -12°C DP and 8% relative humidity. The engineering implications of this adjustment include:

  • Reduced Humidification Energy: Traditional relative humidity-based control often requires significant humidification to maintain above 40% RH during winter or in dry regions, consuming considerable steam or electricity. With dew point as the metric, humidification needs can be reduced or eliminated under most climate conditions
  • Electrostatic Discharge (ESD) Prevention: The risk of excessively low humidity lies in electrostatic discharge potentially damaging IT equipment. TC 9.9 considers a dew point of -9°C DP (approximately equivalent to 10% RH at 25°C dry bulb) sufficient to control ESD risk when proper grounding practices are followed
  • Condensation Prevention: The dew point upper limit (17°C DP for A1, 21°C DP for A2) ensures that chilled water piping and cold aisle surfaces do not develop condensation
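The dew-point-to-RH relationship behind these limits can be checked with the Magnus approximation for saturation vapor pressure. A minimal sketch (the Magnus coefficients used here are one common convention; function names are illustrative):

```python
import math

def saturation_vp_hpa(t_c: float) -> float:
    """Saturation vapor pressure over water (hPa), Magnus approximation."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def rh_from_dewpoint(dp_c: float, dry_bulb_c: float) -> float:
    """Relative humidity (%) given dew point and dry-bulb temperature."""
    return 100.0 * saturation_vp_hpa(dp_c) / saturation_vp_hpa(dry_bulb_c)

# The -9°C DP recommended lower limit at a 25°C dry bulb:
print(round(rh_from_dewpoint(-9.0, 25.0), 1))  # ≈ 9.8 (% RH)
```

The same functions let a controls engineer convert the A1 allowable dew point ceiling (17°C DP) into an RH alarm setpoint at any supply air temperature.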

5. High-Density Cooling and Liquid Cooling Technology

As AI/ML workload-driven GPU rack power breaks through 100 kW, traditional air cooling approaches its physical limits. ASHRAE TC 9.9 published the Liquid Cooling Guidelines for Datacom Equipment Centers in 2014, with subsequent version updates providing an engineering design framework for liquid cooling technology[7].

TC 9.9 defines supply water temperature classes for liquid cooling systems:

Class Supply Water Temp. Cooling Method Energy Saving Potential
W1 2-17°C Chiller-supplied cooling Low (requires compressor)
W2 2-27°C Chiller + partial free cooling Medium
W3 2-32°C Cooling tower direct supply High
W4 2-45°C Dry cooler Highest (year-round compressor-free)

The high-temperature liquid cooling options of W3 and W4 enable cooling systems to rely entirely on cooling towers or dry coolers for heat rejection year-round, eliminating compressor operation and shrinking the cooling system's contribution to PUE to little more than pump and fan energy. For Taiwan, the W3 class can achieve complete free cooling during autumn and winter, while summer still requires supplemental chilled water cooling[8].

6. Implications for Taiwan Data Center HVAC Design

Taiwan's subtropical location, with summer outdoor temperatures reaching 35°C and relative humidity exceeding 70%, presents unique challenges for data center HVAC design:

  • Limited Free Cooling Hours: Based on Taipei weather data, holding the A1 recommended envelope (27°C inlet ceiling) with supply air below 18°C requires mechanical cooling for approximately 85% of annual hours, with free cooling available only briefly during winter. Designing to the A2 envelope can increase free cooling hours to approximately 25-30%
  • Dew Point Control Over Relative Humidity: In Taiwan's high-humidity environment, adopting TC 9.9's dew point control strategy can significantly reduce dehumidification energy consumption, though extreme humidity conditions during plum rain and typhoon seasons still require attention
  • Liquid Cooling as an Inevitable Trend: Facing AI computing demands, Taiwan's Tier III/IV data centers are accelerating adoption of rear-door heat exchangers and direct-to-chip liquid cooling solutions
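The free-cooling-hours estimates above can be reproduced from hourly weather data with a simple threshold count. A minimal sketch, run here on a small synthetic temperature list (real analysis would use TMY or local weather-station hourly data, which this example does not include):

```python
def free_cooling_fraction(hourly_temps_c, supply_limit_c: float = 18.0) -> float:
    """Fraction of hours in which outdoor air alone can meet the required
    supply temperature (ignoring humidity constraints, which matter in
    Taiwan's climate and would reduce the usable hours further)."""
    ok = sum(1 for t in hourly_temps_c if t <= supply_limit_c)
    return ok / len(hourly_temps_c)

# Synthetic illustration: two cool hours, two warm hours
sample = [10.0, 15.0, 20.0, 25.0]
print(free_cooling_fraction(sample))  # 0.5
```

Raising `supply_limit_c` to reflect a higher permitted inlet temperature (the A2-versus-A1 decision) directly increases the counted fraction, which is the quantitative basis for the envelope-relaxation trend discussed in Section 2.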

Conclusion

The ASHRAE TC 9.9 Thermal Guidelines is not merely a temperature and humidity specification sheet but embodies the data center industry's balancing philosophy among energy efficiency, reliability, and sustainability. For HVAC engineers, deeply understanding TC 9.9's design logic -- centering on inlet temperature, replacing relative humidity with dew point, and addressing high-density challenges with liquid cooling -- is fundamental to designing next-generation data center cooling systems. As AI-era computing demands continue to escalate, this standard will continue to evolve and deserves close attention from engineering practitioners.