Simulator-Based Reinforcement Learning for Data Center Cooling Optimization | AI Reading Summary (包阅AI)

包阅AI Reading Digest

1. Keywords: data centers, cooling optimization, reinforcement learning, energy savings, simulation

2. Summary: Meta uses simulator-based reinforcement learning to optimize data center cooling, reducing energy and water consumption. The same method will apply to its new data center designs. The article also covers how cooling works, the reinforcement learning approach, and pilot results.

3. Main points:

– Why data center cooling optimization matters
  – Efficiency is key to sustainable data center design; cooling consumes large amounts of energy and water, and improving cooling efficiency helps address climate change.
– How cooling works at Meta's data centers
  – A two-tier penthouse design uses 100% outside air for cooling; the air passes through several stages that condition its temperature and humidity.
  – A building management system monitors and controls the mechanical system components, with three major control loops adjusting supply air setpoints.
– A reinforcement learning approach to data center cooling
  – Reinforcement learning is well suited to modeling control systems, and data center cooling control can be modeled under this paradigm.
  – An offline, simulator-based RL approach is used: the agent starts from real historical observations and operates in a simulated environment.
  – The simulator is a physics-based model of building energy use that outputs the dynamic system response.
– Results of the RL approach
  – A pilot started in 2021 at one data center region; the new supply airflow setpoint fluctuates with temperature and server load cycles.
  – Data center temperature conditions never went out of spec, and the pilot reduced supply fan energy consumption by 20% and water usage by 4% on average.


Article URL: https://engineering.fb.com/2024/09/10/data-center-engineering/simulator-based-reinforcement-learning-for-data-center-cooling-optimization/
Source: engineering.fb.com
Author: Engineering at Meta
Published: 2024/9/9 23:42
Language: English
Word count: 1,740
Estimated reading time: 7 minutes
Score: 90
Tags: data center cooling, reinforcement learning, energy efficiency, water optimization, AI in data centers


The original article follows below.


  • We’re sharing more about the role that reinforcement learning plays in helping us optimize our data centers’ environmental controls.
  • Our reinforcement learning-based approach has helped us reduce energy consumption and water usage across various weather conditions in our data centers.
  • Meta is revamping its new data center design to optimize for artificial intelligence and the same methodology will be applicable for future data center optimizations as well.

Efficiency is one of the key components of Meta’s approach to designing, building, and operating sustainable data centers. Besides the IT load, cooling is the primary consumer of energy and water in the data center environment. Improving the cooling efficiency helps reduce our energy use, water use, and greenhouse gas (GHG) emissions and also helps address one of the greatest challenges of all – climate change.

Most of Meta’s existing data centers use outdoor air and evaporative cooling systems to maintain environmental conditions within the envelope of temperature between 65°F and 85°F (18°C and 30°C) and relative humidity between 13% and 80%. As water and energy are consumed in the conditioning of this air, optimizing the amount of supply airflow that has to be conditioned is a high priority in terms of improving operational efficiency.
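As a rough illustration, this operating envelope can be expressed as a simple containment check. The sketch below is illustrative only: the temperature and humidity thresholds come from the paragraph above, while the class and method names are hypothetical.

```python
# Minimal sketch of the operating envelope described above. Thresholds come from
# the article; the class and method names are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class OperatingEnvelope:
    temp_min_f: float = 65.0   # °F
    temp_max_f: float = 85.0   # °F
    rh_min_pct: float = 13.0   # % relative humidity
    rh_max_pct: float = 80.0   # % relative humidity

    def contains(self, temp_f: float, rh_pct: float) -> bool:
        """True if the air condition falls inside the operating envelope."""
        return (self.temp_min_f <= temp_f <= self.temp_max_f
                and self.rh_min_pct <= rh_pct <= self.rh_max_pct)

envelope = OperatingEnvelope()
print(envelope.contains(temp_f=72.0, rh_pct=45.0))   # True: within the envelope
print(envelope.contains(temp_f=95.0, rh_pct=10.0))   # False: needs cooling and humidification
```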

Since 2021, we have been leveraging AI to optimize the amount of airflow supply into data centers for cooling purposes. Using simulator-based reinforcement learning, we have, on average, reduced the supply fan energy consumption at one of the pilot regions by 20% and water usage by 4% across various weather conditions.

Previously, we shared how a physics-based thermal simulator helps us optimize our data centers’ environmental controls. Now, we will shed more light on the role of reinforcement learning in the solution. As Meta is revamping its new data center design to optimize for artificial intelligence, the same methodology will be applicable for future data center optimizations as well to improve operational efficiency.

How cooling works at Meta’s data centers

Currently, Meta’s data centers adopt a two-tiered penthouse design that utilizes 100% outside air for cooling. As shown in Figure 1, the air enters the facility through louvers on the second-floor “penthouse,” with modulating dampers regulating the volume of outside air. The air passes through a mixing room, where outdoor air, if too cold, can be mixed with heat from server exhaust when needed to regulate the temperature.

The air then passes through a series of air filters and a misting chamber where the evaporative cooling and humidification (ECH) system is used to further control the temperature and humidity. The air continues through a fan wall that pushes the air through openings in the floor that serve as an air shaft leading into the server area on the first floor. The hot air from the server exhaust is contained in the hot aisle, travels through exhaust shafts, and is eventually released out of the building with the help of relief fans.

Water is mainly used in two ways: evaporative cooling and humidification. The evaporative cooling system converts water into vapor to lower the temperature when the outside air is too hot, while the humidification process maintains the humidity level if the air is too dry. As a result of this design, we believe Meta’s data centers are among the most advanced, energy and water efficient data centers in the world.

Figure 1: The penthouse cooling system within Meta’s data centers.

In order to supply air within the defined operating envelope, the penthouse relies on the building management system (BMS) to monitor and control different components of the mechanical system. This system performs the task of conditioning the intake air from outside by mixing, humidifying/dehumidifying, evaporative cooling, or a combination of these operations.

There are three major control loops responsible for adjusting setpoints for supply air: temperature, humidity, and airflow. The airflow setpoint is typically calculated based on a small set of input variables like current IT load, cold aisle temperature, and differential pressure between the cold aisle and hot aisle. The logic is often a simple linear relationship, but it becomes very difficult to model accurately because these values at different locations in the data center are coupled to one another and highly dependent on complex local boundary conditions. However, the amount of airflow largely dictates the energy used by the supply fan arrays, as well as the water consumption when cooling or humidification is required. Therefore, given that the temperature and humidity boundaries of the operating envelope are fixed, optimizing the airflow setpoint has the greatest impact on further improving cooling efficiency.
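To make the baseline concrete, here is a hedged sketch of what a simple, linear BMS-style airflow setpoint rule of the kind described above might look like. The inputs (IT load, cold aisle temperature, and cold-to-hot-aisle differential pressure) are the ones named in the article; every coefficient, reference value, and clamp limit is a made-up illustration value, not a real control constant.

```python
# Hypothetical sketch of a simple, linear BMS-style airflow setpoint rule.
# All constants below are assumptions for illustration.
def baseline_airflow_setpoint(it_load_kw: float,
                              cold_aisle_temp_f: float,
                              dp_cold_to_hot_pa: float) -> float:
    """Return a supply airflow setpoint in CFM from a linear combination of inputs."""
    CFM_PER_KW = 120.0     # assumed airflow demand per kW of IT load
    TEMP_GAIN = 500.0      # assumed extra CFM per °F of cold aisle temperature above reference
    DP_GAIN = 2_000.0      # assumed extra CFM per Pa of pressurization shortfall
    TEMP_REF_F = 75.0      # assumed cold aisle reference temperature
    DP_TARGET_PA = 5.0     # assumed target cold-to-hot-aisle differential pressure

    setpoint = (CFM_PER_KW * it_load_kw
                + TEMP_GAIN * max(0.0, cold_aisle_temp_f - TEMP_REF_F)
                + DP_GAIN * max(0.0, DP_TARGET_PA - dp_cold_to_hot_pa))
    return min(max(setpoint, 10_000.0), 500_000.0)   # clamp to assumed fan-array limits

print(baseline_airflow_setpoint(it_load_kw=500.0, cold_aisle_temp_f=78.0, dp_cold_to_hot_pa=3.0))
```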

A reinforcement learning approach to data center cooling

Reinforcement learning (RL) is good at modeling control systems as sequential state machines. It functions as a software agent that determines what action to take at each state based on some transition model – which leads to a different state – and constantly gets feedback from the environment in terms of reward. In the end, the agent learns the best policy model (typically parameterized by a deep neural network) to achieve the optimal accumulated reward. The data center cooling control can be naturally modeled under this paradigm.

At any given time, the state of a data center can be represented by a set of environmental variables monitored by many different sensors for outside air, supply air, cold aisle and hot aisle, plus IT load (i.e., power consumption by servers), etc. The action is to control setpoints – for example, the supply airflow setpoint that determines how fast the supply fans run to meet the demand. The policy is the function mapping from the state space to action space (i.e., determining the appropriate airflow setpoint based on current state conditions). Now the task is to leverage historical data we have collected from thousands of sensors in our data centers – augmented with simulated data of potential, but not yet experienced conditions – and train a better policy model that gives us better reward in terms of energy or water usage efficiency.
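As a purely illustrative sketch of this framing, the state can be flattened into a vector of the sensor readings listed above and the policy into a small neural network that maps it to an airflow setpoint. The field names mirror quantities mentioned in the article; the network architecture and the choice of PyTorch are assumptions, not a description of Meta's actual model.

```python
# Illustrative state/action/policy sketch; architecture and library choice are assumptions.
import torch
import torch.nn as nn

STATE_FIELDS = [
    "outside_air_temp", "outside_air_rh",    # outside air sensors
    "supply_air_temp", "supply_air_rh",      # supply air sensors
    "cold_aisle_temp", "hot_aisle_temp",     # aisle sensors
    "differential_pressure", "it_load_kw",   # pressurization and server power
]

class AirflowPolicy(nn.Module):
    """Policy model: maps a state vector to a supply airflow setpoint (the action)."""
    def __init__(self, state_dim: int = len(STATE_FIELDS), hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),   # one continuous action: the airflow setpoint
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

policy = AirflowPolicy()
state = torch.randn(1, len(STATE_FIELDS))   # placeholder normalized sensor readings
print(policy(state))                         # proposed (unscaled) airflow setpoint
```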

The idea of using AI for data center cooling optimization is not new. Various RL approaches have also been reported, such as transforming cooling optimization via deep reinforcement learning and data center cooling using model-predictive control.

However, applying the control policy determined by an online RL model may result in various risks including breaches of service requirements and even thermal unsafety. To address this challenge, we adopted an offline simulator based RL approach. As illustrated in Figure 2, our RL agent operates in a simulated environment by starting from real-life historical observations, S. It then explores the action space, feeding into the simulator to predict the anticipated new state S’ and reward, R, given each sampled action, A. From there it collects the pairs (S, A) that have the best reward to form a new training data set to update the parameterized policy model.

Figure 2: Our simulator-based offline RL approach.
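The loop in Figure 2 can be sketched end to end with toy stand-ins. In the sketch below, the simulator, reward, and policy are deliberately tiny placeholder models (a two-variable heat balance and a least-squares line), not Meta's physics model or neural network; only the overall structure follows the article: start from historical states S, sample candidate actions A, let the simulator predict the next state S' and reward R, keep the best (S, A) pairs, and fit the policy to them.

```python
# Toy, self-contained sketch of the offline RL loop; all models are stand-ins.
import random

def toy_simulator(state, action):
    """Stand-in dynamics: state = (it_load_kw, cold_aisle_temp_f), action = airflow in CFM."""
    it_load_kw, cold_temp_f = state
    next_temp_f = cold_temp_f + it_load_kw / 50.0 - action / 20_000.0   # toy heat balance
    fan_energy = (action / 100_000.0) ** 3                              # fan power ~ flow^3
    reward = -fan_energy if next_temp_f <= 85.0 else -1e6               # penalize spec breaches
    return (it_load_kw, next_temp_f), reward

def fit_linear_policy(pairs):
    """Least-squares fit of action = slope * it_load + intercept (a deliberately tiny 'policy model')."""
    xs = [s[0] for s, _ in pairs]
    ys = [a for _, a in pairs]
    mean_x, mean_y = sum(xs) / len(xs), sum(ys) / len(ys)
    var = sum((x - mean_x) ** 2 for x in xs) or 1.0
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / var
    return lambda state: slope * state[0] + (mean_y - slope * mean_x)

historical_states = [(random.uniform(200, 800), random.uniform(68, 78)) for _ in range(200)]
best_pairs = []
for state in historical_states:                                             # real observations S
    candidates = [random.uniform(20_000, 300_000) for _ in range(32)]       # explore actions A
    best_action = max(candidates, key=lambda a: toy_simulator(state, a)[1]) # best simulated reward R
    best_pairs.append((state, best_action))
policy = fit_linear_policy(best_pairs)                                      # update the policy model
print(round(policy((500.0, 75.0))))                                         # proposed CFM for a sample state
```

In this toy version, the "best" action for each state is simply the lowest-energy airflow that keeps the simulated temperature in spec, which is the same shape of trade-off the article describes.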

Our simulator is a physics-based model of building energy use that takes as inputs time series such as weather data, IT load, and setpoint schedules. The model is built with data center building parameters, including geometry, construction materials, HVAC, system configurations, component efficiencies, and control strategies. It uses differential equations to output the dynamic system response, such as the thermal load and resulting energy use, along with related metrics like cold aisle temperature and differential pressure profiles.
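For intuition only, the sketch below shows what a "dynamic system response" means in the simplest possible form: a single lumped-parameter differential equation for cold aisle temperature, integrated with Euler steps. A real building energy model is far richer (geometry, materials, HVAC components, control strategies); every coefficient here is an order-of-magnitude assumption.

```python
# Intuition-only sketch of a dynamic thermal response; all coefficients are assumptions.
def simulate_cold_aisle(supply_temp_f: float, it_load_kw: float, airflow_cfm: float,
                        t_initial_f: float = 75.0, dt_s: float = 30.0, steps: int = 120):
    """Integrate dT/dt = (server heat in - heat removed by supply air) / thermal mass."""
    THERMAL_MASS_KJ_PER_F = 5_000.0   # assumed effective thermal mass of the cold aisle air volume
    AIR_HEAT_FACTOR = 0.0003          # assumed kJ/s removed per CFM per °F of temperature difference

    temps = [t_initial_f]
    t = t_initial_f
    for _ in range(steps):
        heat_in_kj_s = it_load_kw                                       # 1 kW = 1 kJ/s
        heat_out_kj_s = AIR_HEAT_FACTOR * airflow_cfm * (t - supply_temp_f)
        t += dt_s * (heat_in_kj_s - heat_out_kj_s) / THERMAL_MASS_KJ_PER_F
        temps.append(t)
    return temps   # one-hour temperature trajectory at 30-second resolution

profile = simulate_cold_aisle(supply_temp_f=68.0, it_load_kw=500.0, airflow_cfm=100_000.0)
print(round(profile[-1], 1))   # settles near the steady-state cold aisle temperature
```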

The simulator plays a very important role here, since our goal is to optimize energy and water usage while keeping the data center conditions within spec so that hardware performance isn’t affected. More specifically, we want to keep the rise in cold aisle temperature below a certain threshold, or maintain positive pressurization from the cold aisle to the hot aisle, to minimize the parasitic heat caused by recirculation.
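One hedged way such constraints could enter the optimization is as penalty terms in the reward. The sketch below is illustrative only; the weights, thresholds, and overall shape are assumptions rather than Meta's actual reward design.

```python
# Hedged sketch of constraint penalties in a reward; all values are assumptions.
def reward(fan_energy_kwh: float, water_liters: float,
           cold_aisle_temp_rise_f: float, dp_cold_to_hot_pa: float,
           max_temp_rise_f: float = 5.0,      # assumed allowable cold aisle temperature rise
           water_weight: float = 0.1,         # assumed relative weight of water vs. energy
           violation_penalty: float = 1e6) -> float:
    cost = fan_energy_kwh + water_weight * water_liters
    # Out-of-spec states (excess temperature rise, or loss of cold-to-hot pressurization,
    # which allows hot-air recirculation) are heavily penalized.
    if cold_aisle_temp_rise_f > max_temp_rise_f or dp_cold_to_hot_pa <= 0.0:
        cost += violation_penalty
    return -cost

print(reward(fan_energy_kwh=12.0, water_liters=40.0,
             cold_aisle_temp_rise_f=3.0, dp_cold_to_hot_pa=4.0))   # -16.0: within spec
```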

Additionally, the physics-based simulator enables us to train the RL model with all possible scenarios, not only those present in the historical data. This increases reliability during outlier events and allows for rapid deployment in newly commissioned data centers.

Results of our RL approach

In 2021, we started a pilot at one of Meta’s data center regions, with the RL model directly controlling the supply airflow setpoint. Figure 3 compares the new setpoint, in cubic feet per minute (CFM), shown as the red line, with the original BMS setpoint (the dotted blue line) over one week for illustration purposes.

Figure 3: A comparison of the RL model versus the original BMS setpoint.

The fluctuation is mainly determined by the supply air temperature and server load cycles at different times of day. More importantly, as shown in Figure 4, the data center temperature conditions never went out of spec despite the reduced airflow supply, with both the cold aisle average and maximum temperatures compared against the supply air temperature.

Figure 4: A data center temperature profile under RL model control.

It is noticeable that the CFM savings vary under different supply air temperatures, as the univariate chart in Figure 5 shows. The CFM savings can easily be converted to energy savings for the supply fans. Under hot and dry conditions, when evaporative cooling or humidification is required, using less air results in less water usage as well. Over the past couple of years of the pilot, on average, we were able to reduce supply fan energy consumption by 20% and water usage by 4% across various weather conditions.

Figure 5. A breakdown of airflow savings at different supply air temperatures.
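One common (hedged) way to translate airflow savings into supply fan energy savings is via the fan affinity laws, under which fan power scales roughly with the cube of airflow; this ignores fan efficiency curves and system effects, and the article does not state which conversion Meta actually uses. The sketch below is only an illustration of that rule of thumb.

```python
# Fan affinity law approximation: fan power ~ (airflow)^3, efficiency effects ignored.
def fan_energy_saving_fraction(cfm_saving_fraction: float) -> float:
    """Approximate fractional fan power saving for a fractional airflow reduction."""
    remaining_flow = 1.0 - cfm_saving_fraction
    return 1.0 - remaining_flow ** 3

print(f"{fan_energy_saving_fraction(0.10):.0%}")   # a 10% airflow cut ≈ 27% fan power saving
```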

Future work for AI in data center optimization

This effort has opened the door to transforming how our data centers operate. By introducing automated predictions and continuous optimization for tuning environmental conditions in our data centers, we can bend the cost curve and reduce the effort spent on labor-intensive tasks.

Meta is breaking ground on new types of data centers that are designed to optimize for artificial intelligence. We plan to apply the same methodology presented here to our future data centers at the design phase to help ensure they’re optimized for sustainability from day one of their operations.

We’re also currently rolling out our RL approach to data center cooling to our existing data centers. Over the next couple of years, we expect to achieve significant energy and water usage savings that contribute to Meta’s long-term sustainability goals.

Acknowledgements

We would like to thank our partners in IDC Facility Operations (Butch Howard, Randy Ridgway, James Monahan, Jose Montes, Larame Cummings, Gerson Arteaga Ramirez, John Fabian, and many others) for their support.