Chiller systems account for 40–60% of HVAC energy consumption in large buildings, making them the biggest leverage point for energy savings. Traditional operating strategies — fixed start/stop sequences and fixed temperature setpoints — are far from optimal. When outdoor conditions, building loads, and electricity rates fluctuate over time, the optimal operating strategy should adjust dynamically as well. As the third installment in this series, this article explores how AI enables chiller plants to achieve true real-time optimization.

AI x HVAC Series
  1. Data Foundations: From Sensors to Machine Learning Models
  2. Fault Detection and Predictive Maintenance
  3. Chiller Plant Optimization: From MPC to Deep Reinforcement Learning (This Article)
  4. Future Vision: Digital Twins, Generative AI, and Edge Intelligence

1. Defining the Optimization Problem

Chiller plant operational optimization can be mathematically formulated as a constrained nonlinear optimization problem. The objective function minimizes total system energy consumption (or total operating cost), with decision variables including:

  • Chiller Start/Stop Combinations: Which chillers should be running at any given moment
  • Chilled Water Supply Temperature Setpoint: Lowering supply temperature increases chiller energy consumption but enhances cooling capacity
  • Condenser Water Temperature Setpoint: Lower condenser water temperature improves chiller efficiency but increases cooling tower energy consumption
  • Pump and Fan Frequencies: Optimal variable-frequency operation varies with load

Constraints include space temperature and humidity requirements, equipment operating limits, start/stop frequency restrictions, and electrical capacity limits. The challenge lies in the system's strong nonlinearity, time-varying characteristics, coupling effects between subsystems, and time delays introduced by building thermal inertia.
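The formulation above can be made concrete with a small numerical sketch. The example below uses SciPy to minimize a toy plant-power model over the chilled water supply temperature, condenser water temperature, and pump frequency; every coefficient and bound is illustrative, not taken from any real chiller.

```python
from scipy.optimize import minimize

def plant_power(x, load_kw=800.0):
    """Toy total plant power model (kW) at a fixed cooling load.
    x = [t_chws, t_cws, pump_hz]; all coefficients are illustrative."""
    t_chws, t_cws, pump_hz = x
    # Chiller power grows with lift (condenser temp minus supply temp).
    chiller = load_kw * (0.18 + 0.012 * (t_cws - t_chws))
    # Pump power follows the cube of speed (affinity laws).
    pump = 30.0 * (pump_hz / 50.0) ** 3
    # Tower fans work harder to hold the condenser water colder.
    tower = 40.0 * (35.0 - t_cws) ** 2 / 25.0
    return chiller + pump + tower

bounds = [(5.0, 10.0),   # chilled water supply temp (degC)
          (28.0, 35.0),  # condenser water temp (degC)
          (30.0, 50.0)]  # pump frequency (Hz)

res = minimize(plant_power, x0=[7.0, 32.0, 45.0], bounds=bounds)
```

Even in this toy model, the opposing terms (chiller power falling, tower fan power rising as condenser water temperature drops) produce an interior optimum for the condenser water setpoint, which is the trade-off described in the bullet list above.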

2. Traditional Optimization Methods: A Review

Braun's (1990) research[1] laid the theoretical foundation for chiller plant optimization. He developed steady-state models of chillers, cooling towers, and pumps, and demonstrated that for a given load condition an optimal combination of condenser water temperature and pump flow rate exists. This work has inspired a large body of subsequent research and commercial products.

The main limitations of traditional methods include:

  • Steady-State Assumption: Most optimization is solved under steady-state conditions, ignoring system dynamic response and building thermal inertia
  • Model Uncertainty: Physical model parameters (such as chiller performance curves) require periodic calibration and cannot accurately reflect equipment degradation
  • Computational Speed: Traditional nonlinear programming requires iterative solving, which may not meet the response speed requirements of real-time control

Section 5.18 of ASHRAE Guideline 36[2] provides high-performance control sequences for chiller systems, including load-based start/stop logic and condenser water temperature reset strategies. These rule-based methods can achieve 10–20% energy savings in practice, but a gap remains between them and the theoretical optimum.
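For illustration, a condenser water temperature reset of the kind Guideline 36 describes can be reduced to tracking outdoor wet-bulb temperature plus a fixed approach, clamped to equipment limits. The actual guideline sequence is more detailed; the approach and limits below are assumed values.

```python
def cw_setpoint_reset(wet_bulb_c, approach_c=4.0, lo=24.0, hi=32.0):
    """Simplified condenser water supply temperature reset (degC):
    follow outdoor wet-bulb plus a fixed approach, clamped to
    assumed equipment limits. Illustrative only."""
    return min(hi, max(lo, wet_bulb_c + approach_c))
```

A rule like this captures why rule-based sequences leave a gap to the optimum: the best approach is not a constant but varies with load and tower fan power, which is exactly what model-based optimization exploits.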

3. Model Predictive Control (MPC): Forward-Looking Optimization

The core idea of Model Predictive Control (MPC) is to use a predictive model of the system to solve for the optimal control sequence within a rolling time horizon. For chiller systems, MPC can incorporate weather forecasts, building load predictions, and electricity rate information to plan operating strategies hours in advance.

The Three Pillars of MPC

  • Predictive Model: Building thermodynamic models (simplified RC circuit models or data-driven ML models) predict future load changes
  • Objective Function: Minimize total energy consumption or total cost within the prediction horizon (can incorporate time-of-use electricity pricing)
  • Rolling Optimization: Re-solve at each control step, using the latest measurement data to correct prediction deviations
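The three pillars can be combined into a minimal receding-horizon loop. The sketch below uses a hypothetical first-order zone model and made-up time-of-use prices; all coefficients (outdoor temperature, thermal response, comfort weight) are illustrative assumptions, not a deployable controller.

```python
import numpy as np
from scipy.optimize import minimize

T_OUT, T_SET, H = 32.0, 24.0, 6   # outdoor temp, setpoint (degC), horizon

def step(t, u):
    """One time step of a toy first-order thermal model: the zone
    drifts toward outdoor temperature and is pulled down by cooling
    power u (kW). Coefficients are illustrative."""
    return t + 0.1 * (T_OUT - t) - 0.05 * u

def mpc_step(t_now, price):
    """Solve one receding-horizon problem; return only the first move."""
    def cost(u):
        t, total = t_now, 0.0
        for k in range(H):
            t = step(t, u[k])
            # Energy cost plus a weighted comfort penalty.
            total += price[k] * u[k] + 10.0 * (t - T_SET) ** 2
        return total
    res = minimize(cost, x0=np.full(H, 10.0), bounds=[(0.0, 100.0)] * H)
    return res.x[0]

t = 27.0
prices = np.array([1.0, 1.0, 3.0, 3.0, 1.0, 1.0])  # assumed TOU tariff
for _ in range(4):
    u0 = mpc_step(t, prices)
    t = step(t, u0)   # plant evolves; re-solve with the new measurement
```

Note that only the first action of each solved sequence is applied; the loop then re-solves from the latest measurement, which is what gives MPC its feedback character.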

Professor Henze's team at the University of Colorado Boulder demonstrated MPC application in ice storage systems in their 2004 research[3], achieving significant electricity cost savings by predicting next-day cooling demand and electricity rate structures to optimize ice charging and discharging schedules. Drgona et al. (2020) systematically reviewed MPC progress in building energy systems[4], noting that data-driven MPC is overcoming the modeling cost bottleneck of traditional physics-based MPC.

Practical Challenges of MPC

Although MPC has demonstrated 15–30% energy saving potential in academic research, its deployment in actual engineering projects still faces challenges: the cost of developing and maintaining prediction models, real-time solver performance requirements, the complexity of integrating with existing BMS, and the difficulty of earning engineering teams' trust in "black box" controllers.

4. Deep Reinforcement Learning: From Simulation to Real-World Deployment

Deep Reinforcement Learning (DRL) brings an entirely new paradigm to HVAC optimization: without needing to build a system model in advance, an AI agent learns the optimal control policy autonomously through repeated interaction (trial and error) with the environment.

Common DRL Algorithms

  • DQN (Deep Q-Network): Suitable for discrete action spaces (e.g., chiller start/stop decisions); Wei et al. (2017) were among the first to apply DQN to HVAC control[5]
  • DDPG (Deep Deterministic Policy Gradient): Suitable for continuous action spaces (e.g., continuous adjustment of temperature setpoints)
  • SAC (Soft Actor-Critic): Offers better stability and exploration efficiency in continuous control tasks
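As a toy illustration of the discrete start/stop case, the sketch below uses tabular Q-learning on an invented chiller-staging problem (state = load level, action = number of chillers running). DQN replaces the table with a neural network, but the update rule is the same in spirit; all capacities, energies, and penalties here are assumptions for illustration.

```python
import random

N_STATES, N_ACTIONS = 5, 3        # load levels 0-4; run 0, 1, or 2 chillers
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1 # learning rate, discount, exploration

def reward(load, n_on):
    """Negative cost: each running chiller serves 2 load units and
    draws 3 energy units; unmet load is heavily penalized."""
    capacity, energy = 2 * n_on, 3 * n_on
    unmet = max(0, load - capacity)
    return -energy - 10 * unmet

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
random.seed(0)
state = 0
for _ in range(20000):
    # Epsilon-greedy exploration over staging decisions.
    if random.random() < EPS:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    r = reward(state, action)
    next_state = random.randrange(N_STATES)   # load changes randomly
    # Standard Q-learning temporal-difference update.
    Q[state][action] += ALPHA * (r + GAMMA * max(Q[next_state])
                                 - Q[state][action])
    state = next_state

policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a])
          for s in range(N_STATES)]
```

The learned policy stages chillers up with load without ever being given the staging rule explicitly, which is the core appeal of RL; the real difficulty, as discussed below, is doing this safely on a physical plant.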

CityLearn: A Standardized Training and Evaluation Environment

The CityLearn platform[6], developed by Professor Nagy's team at UT Austin, provides a standardized simulation environment for building energy DRL research. CityLearn simulates building cluster energy dynamics, including HVAC, energy storage, and grid interactions, allowing researchers to safely train and compare different DRL algorithms — without conducting exploratory control on real buildings that could cause discomfort or equipment damage.

Core Challenges of DRL

Deploying DRL from simulation environments to real chiller systems faces the "Sim-to-Real" gap:

  • Safety Guarantees: DRL exploration behavior may produce unsafe control actions (e.g., causing space temperatures to spike or excessive equipment cycling)
  • Training Data Requirements: DRL typically requires millions of interaction steps to converge, which is impractical on real systems
  • Non-Stationary Environment: Building usage patterns and outdoor conditions continuously change, requiring the DRL agent to adapt continuously

5. Physics-Informed Machine Learning: Fusing Knowledge and Data

Physics-Informed Machine Learning (PIML) strikes a balance between purely data-driven and purely physics-based models by embedding known physical laws (such as energy conservation and the first and second laws of thermodynamics) as constraints within ML models.

For chiller systems, PIML applications include: incorporating energy balance constraints in neural network loss functions, using physical models for physics-guided data augmentation, and grey-box models that combine physical equations with ML modules. The advantage of PIML is achieving good prediction accuracy with less data, and model outputs are less likely to violate physical principles.
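The first of these applications, an energy-balance term in the loss function, can be sketched directly. For a vapor-compression chiller the first law requires condenser heat rejection to roughly equal evaporator load plus compressor power, so predictions that violate Q_cond - Q_evap = W are penalized. The function below is a NumPy illustration with assumed variable names; in a real training loop it would be a differentiable loss.

```python
import numpy as np

def physics_informed_loss(pred_power, true_power, q_evap, q_cond, lam=1.0):
    """Data-fit loss plus a soft first-law penalty. pred_power is the
    model's predicted compressor power W (kW); q_evap and q_cond are
    measured evaporator and condenser heat flows. lam weights the
    physics term; all names and the weighting are illustrative."""
    data_loss = np.mean((pred_power - true_power) ** 2)
    residual = q_cond - q_evap - pred_power   # first-law residual (kW)
    physics_loss = np.mean(residual ** 2)
    return data_loss + lam * physics_loss
```

A prediction consistent with the energy balance incurs no physics penalty, while physically impossible predictions are pushed back toward the feasible region even in regions where training data is sparse.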

6. Practical Considerations for Chiller Plants in Taiwan

Applying AI optimization techniques to chiller plants in Taiwan requires consideration of the following localization factors:

  • High Temperature and Humidity Operating Conditions: Taiwan's high average wet-bulb temperature limits cooling tower efficiency, and chillers operate at high condensing temperatures for extended periods. The optimization space differs from that of temperate regions
  • Electricity Rate Structure: Taipower's peak/semi-peak/off-peak rate differentials provide economic incentives for load shifting; MPC can optimize ice storage and load shedding schedules accordingly
  • Equipment Fleet Diversity: Buildings in Taiwan commonly have chiller configurations comprising units of different vintages, manufacturers, and capacities. AI models must handle coordinated optimization of heterogeneous equipment fleets
  • Strict Indoor Temperature and Humidity Requirements: Building owners in Taiwan are sensitive to indoor temperature. AI controllers must achieve a precise balance between energy savings and comfort — energy savings cannot come at the expense of occupant experience
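The rate-structure point can be made concrete with a small cost calculation. The three-tier tariff below is hypothetical (the numbers are not actual Taipower rates); it only illustrates why shifting chiller or ice-storage load out of peak hours pays off.

```python
def daily_energy_cost(hourly_kwh, rates):
    """Sum a 24-entry hourly energy profile against a time-of-use
    tariff. `rates` maps hour (0-23) to price per kWh; the tariff
    built below is hypothetical, not an actual Taipower schedule."""
    return sum(kwh * rates[h] for h, kwh in enumerate(hourly_kwh))

# Hypothetical tariff: off-peak 2.0, semi-peak 4.0, peak 7.0 (NT$/kWh).
rates = [2.0] * 24
for h in range(9, 24):    # assumed semi-peak hours
    rates[h] = 4.0
for h in range(13, 17):   # assumed afternoon peak
    rates[h] = 7.0
```

Evaluating an hourly load profile before and after moving energy from the peak block to off-peak hours (for example via ice storage charged overnight) shows the saving directly, which is the economic signal an MPC objective function would exploit.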

Chapter 43 of the ASHRAE HVAC Applications Handbook[7] provides comprehensive technical guidance on building operational optimization, emphasizing that optimization strategies must be seamlessly integrated with actual building operations and maintenance processes to achieve sustained energy savings.

Conclusion

From Braun's steady-state optimization to MPC's forward-looking control, and then to DRL's autonomous learning — AI optimization for chiller plants is progressing along a clear technological path. But technological advances should not cause engineers to forget the most fundamental principles: the prerequisites for optimization are correct data (first article in this series) and healthy equipment (second article in this series). In the final installment of this series, we will look ahead to the future of AI HVAC — how digital twins, generative AI, and edge intelligence will further reshape the role of HVAC engineers.