The more equipment you pack into a data center, the more heat the facility will generate. Without suitable data center cooling, this heat will lead to suboptimal server performance and shortened hardware lifespans.
To make matters worse, the Uptime Institute's annual report for 2024 reveals that cooling-related problems were the direct cause of 19% of all data center outages in the last three years. With so much at stake, it's no wonder cooling is a top priority for data center managers.
This article is a beginner's guide to data center cooling that covers everything you need to know about keeping hardware at optimal temperatures. Jump in to learn how seasoned facility managers ensure excess heat never causes long-term issues for the data center.
What Is Data Center Cooling?
Data center cooling refers to the collective equipment, systems, and processes that regulate the facility's temperature, humidity, and airflow. Cooling systems dissipate the heat generated by servers, storage systems, networking hardware, and other equipment within a data center.
The primary purpose of data center cooling is to maintain suitable environmental conditions within the facility. The American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) recommends keeping data center temperatures within the range of 64°F to 81°F (18°C to 27°C).
ASHRAE also advises that humidity levels should be between 40% and 60%, depending on regional and environmental factors. Suitable humidity levels prevent the buildup of static electricity (which damages sensitive components) and condensation (which leads to corrosion).
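As a quick illustration, here is a minimal Python sketch that checks whether a sensor reading falls inside the ASHRAE-recommended envelope quoted above. The function name and reading format are our own illustrative choices, not part of any standard or monitoring product:

```python
# Minimal sketch: check a sensor reading against the ASHRAE-recommended
# envelope quoted above (18-27°C / 64-81°F, 40-60% relative humidity).
# The function and reading format are illustrative, not a standard API.

ASHRAE_TEMP_RANGE_C = (18.0, 27.0)      # recommended temperature range, °C
ASHRAE_HUMIDITY_RANGE = (40.0, 60.0)    # recommended relative humidity, %

def within_ashrae_envelope(temp_c: float, rel_humidity: float) -> bool:
    """Return True if a reading sits inside the recommended envelope."""
    temp_ok = ASHRAE_TEMP_RANGE_C[0] <= temp_c <= ASHRAE_TEMP_RANGE_C[1]
    humidity_ok = ASHRAE_HUMIDITY_RANGE[0] <= rel_humidity <= ASHRAE_HUMIDITY_RANGE[1]
    return temp_ok and humidity_ok

print(within_ashrae_envelope(22.5, 50.0))  # True  - safe operating point
print(within_ashrae_envelope(29.0, 45.0))  # False - too warm
```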
Data center cooling is vital to the facility's day-to-day operations. Most devices in a data center produce heat, which must be removed quickly to avoid performance degradation. Excess heat and humidity also cause equipment to malfunction and fail, which can lead to full-blown outages in parts of the facility or across the entire data center.
The costly consequences of cooling failures are driving the growth of the U.S. data center cooling market. Valued at around $4.31 billion in 2023, the market is expected to reach $8.34 billion by 2029 (a CAGR of 11.65%).
Types of Data Center Cooling Systems
Data center cooling systems range from traditional air conditioning to more advanced liquid-based or evaporative cooling systems. Each type has unique benefits and trade-offs, and selecting the appropriate cooling strategy depends on the facility's size, location, available budget, and equipment.
Air-Based Cooling
Air-based cooling systems circulate cool air through server rooms and expel hot air generated by IT equipment. These systems are the most common method for regulating temperatures in data centers. Here are the two go-to types of air-based cooling systems commonly used in data centers:
- Computer room air conditioning (CRAC). CRAC units use refrigerants to cool the air before distributing it throughout the data center to maintain optimal temperatures.
- Computer room air handlers (CRAHs). A CRAH unit uses chilled water from an external chiller to cool the air. Fans blow air over cooling coils filled with chilled water, and CRAH units circulate cooled air throughout the data center.
Many facilities combine CRAC or CRAH units with hot/cold aisle containment. This strategy arranges server racks in alternating rows: the fronts (intake sides) of adjacent rows face each other to create a cold aisle, while the backs (exhaust sides) face each other to form a hot aisle. Aisle containment reduces the mixing of hot and cold air, which improves cooling efficiency.
Additionally, most facilities that use air-based cooling systems rely on raised floors, perforated tiles, and ducted returns to direct cool air to the equipment and hot air away from it.
The main selling point of air-based systems is that they are highly affordable and widely available. These cooling systems are also simple to maintain, especially compared to liquid-based data center cooling. As for the downsides, these systems require a lot of energy, which drives up operating costs. Air-based systems can also struggle to cool high-density server environments or facilities with intensive workloads.
Liquid-Based Cooling
Liquid-based cooling systems use water or specialized coolants to absorb and dissipate heat. Liquids have a far higher heat capacity and thermal conductivity than air, so these cooling systems carry heat away from servers much more effectively than air-based cooling.
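To see why liquids are so effective, here is a hedged back-of-the-envelope Python sketch using the standard sensible-heat relation (heat load = mass flow × specific heat × temperature rise). The 30 kW rack load and 10°C coolant temperature rise are illustrative assumptions, not figures from any specific deployment:

```python
# Back-of-the-envelope sketch: how much water (vs. air) removes a given heat load?
# Uses Q = m_dot * c_p * delta_T. The 30 kW rack load and 10°C rise are
# illustrative assumptions, not vendor figures.

SPECIFIC_HEAT_WATER = 4186.0   # J/(kg*K), specific heat of water
SPECIFIC_HEAT_AIR = 1005.0     # J/(kg*K), approximate specific heat of air

def required_mass_flow(heat_load_w: float, delta_t_k: float, specific_heat: float) -> float:
    """Mass flow (kg/s) needed to absorb heat_load_w with a delta_t_k temperature rise."""
    return heat_load_w / (specific_heat * delta_t_k)

rack_load_w = 30_000.0   # assumed 30 kW high-density rack
delta_t = 10.0           # assumed coolant temperature rise, K

water_flow = required_mass_flow(rack_load_w, delta_t, SPECIFIC_HEAT_WATER)
air_flow = required_mass_flow(rack_load_w, delta_t, SPECIFIC_HEAT_AIR)

print(f"Water: ~{water_flow:.2f} kg/s (roughly {water_flow:.2f} L/s)")              # ~0.72 kg/s
print(f"Air:   ~{air_flow:.1f} kg/s (roughly {air_flow / 1.2:.1f} m³/s at ~1.2 kg/m³)")  # ~3.0 kg/s
```

Per kilogram, water absorbs roughly four times as much heat as air for the same temperature rise, and because water is far denser, the volume that must be pumped is dramatically smaller than the volume of air that would have to be blown.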
Here are the main types of liquid-based cooling systems:
- Chilled water systems. These cooling systems circulate chilled water through pipes connected to cooling coils or directly into cooling units near the equipment. The water absorbs heat and recirculates through a chiller plant for cooling.
- Liquid immersion cooling. In liquid immersion cooling, server components are submerged in a thermally conductive, non-electrically conductive liquid. The liquid absorbs heat directly from the hardware and transfers it to an external cooling system.
- Direct-to-chip cooling. Using small tubes, a direct-to-chip cooling system pumps liquid directly to heat-generating components (e.g., CPUs, GPUs).
Liquid cooling is significantly more efficient than air-based cooling, especially in high-density data centers with computing workloads that generate a lot of heat (e.g., artificial intelligence, machine learning, HPC, cryptocurrency mining). Liquid cooling systems also take up less physical space than air-based systems, which frees up valuable floor space.
The major downside of liquid-based data center cooling is the high initial cost. The setup of liquid-based cooling systems is expensive due to the required infrastructure. Liquid cooling systems also introduce additional risks to the data center as staff members must prepare for potential leaks that could damage equipment.
Evaporative Cooling
Evaporative cooling uses the natural process of water evaporation to lower the temperature. When warm air flows through a medium soaked in water (such as a wet pad), the water evaporates, absorbing heat from the air and lowering its temperature. There are two common types of evaporative cooling:
- Direct evaporative cooling (DEC). A DEC system cools the air by direct contact with a water-soaked medium. As the air passes through a wet pad, it cools and becomes more humid before reentering the data center.
- Indirect evaporative cooling (IEC). An IEC system also cools the air through evaporation, but the water never contacts the air entering the data center. Instead, a heat exchanger transfers heat from the incoming air to a separate evaporatively cooled stream, so the supply air is cooled without gaining humidity.
Evaporative cooling uses significantly less energy than refrigerant-based air conditioning systems. Because it relies on water evaporation rather than compressors, the process requires far less electricity.
However, these systems do not perform well in all climates. Evaporative cooling works best in dry, hot climates where water evaporates quickly, so these systems are not optimal in humid environments. DEC systems also introduce moisture into the air, which increases the risk of condensation inside the data center.
Free Cooling
Free cooling relies on naturally cold outside air to dissipate heat and is a popular strategy in regions that stay cool for much of the year. There are two common types of free cooling:
- Airside economizers. These systems bring cool outside air directly into the data center.
- Waterside economizers. These systems use cool outdoor temperatures to cool water, which circulates through heat exchangers to absorb heat inside the data center.
Free cooling drastically lowers energy consumption by reducing the need for mechanical cooling systems. The lack of mechanical cooling also leads to lower maintenance costs.
The main disadvantage of free cooling is that this strategy is highly dependent on ambient air temperatures. This type of cooling is not a viable option in warm climates. Bringing outside air into the data center also requires robust air filtration systems to prevent dust, pollutants, and contaminants from damaging sensitive equipment.
Hybrid Cooling
Hybrid cooling systems combine two or more different cooling methods (air, liquid, evaporative, and free cooling) into one integrated system. A hybrid cooling system automatically switches between cooling modes based on real-time conditions (e.g., current temperature or computing workload). The facility uses the most energy-efficient cooling system at any given moment.
One of the most common hybrid cooling setups is to combine air-based and liquid-based systems. During periods of low heat generation, the facility relies primarily on traditional air-based cooling. When heat increases, the system activates liquid-based cooling to more efficiently manage hot spots.
The hybrid approach adds redundancy to the cooling setup. If one system fails or becomes less effective (e.g., a free cooling system on a hot day), the hybrid system can automatically switch to another cooling method to ensure continuous operation.
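As a hedged illustration of the switching logic described above, here is a minimal Python sketch. The mode names, thresholds, and inputs are assumptions made for the example; real hybrid controllers consider many more inputs and run vendor-specific control software:

```python
# Minimal sketch of hybrid cooling mode selection. The thresholds, mode
# names, and inputs are illustrative assumptions, not a vendor algorithm.

def select_cooling_mode(outside_temp_c: float, rack_heat_load_kw: float) -> str:
    """Pick the most economical cooling mode for the current conditions."""
    if outside_temp_c <= 15.0:
        return "free-cooling"          # cold outside air does most of the work
    if rack_heat_load_kw >= 25.0:
        return "liquid-assisted"       # liquid loop handles high-density hot spots
    return "air-based"                 # default CRAC/CRAH operation

print(select_cooling_mode(outside_temp_c=8.0, rack_heat_load_kw=12.0))   # free-cooling
print(select_cooling_mode(outside_temp_c=28.0, rack_heat_load_kw=30.0))  # liquid-assisted
print(select_cooling_mode(outside_temp_c=22.0, rack_heat_load_kw=10.0))  # air-based
```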
As for the negatives, integrating multiple cooling methods increases the upfront setup costs. Hybrid systems require sophisticated controls, sensors, and infrastructure to switch between cooling methods seamlessly. Maintaining a hybrid system is also challenging because staff members must manage multiple cooling technologies.
The Cooling Components of a Data Center
The cooling components of a data center work together to keep IT equipment within optimal operating conditions. Here's an overview of the most common cooling components typically found in data centers:
- CRAC and CRAH units. CRAC/CRAH systems are the most common components of air-based cooling.
- Water chillers. Chillers use a refrigeration cycle to actively cool water, which enables precise temperature control.
- Cooling towers. A cooling tower removes heat passively by evaporating a portion of the water that circulates through it.
- Piping and distribution systems. These systems transport coolant liquids from chillers to cooling units and back. They also carry hot water away from cooling components.
- Airflow management systems. These systems include containment aisles and airflow panels that direct cool air to server intakes and exhaust hot air away from the equipment.
- Sensors. Sensors monitor temperature, humidity, and airflow levels within the data center.
- Dampers. A damper controls the airflow within the data center by adjusting the flow of cool air and hot exhaust.
- Cooling control systems. These systems manage the operation of cooling equipment based on real-time data. Most data centers rely on a combination of automated and manual control systems.
- Humidity control systems. These systems regulate humidity levels to prevent condensation and static electricity.
- Pump systems. A pump system circulates coolants through the cooling loops in liquid-based cooling setups.
- Fans and blowers. These components move air within the data center. Fans and blowers supply cool air to racks and exhaust hot air away from the equipment.
- Heat exchangers. Heat exchangers transfer heat from one medium to another. These components are often used in liquid cooling systems to absorb heat from server equipment.
- Filtration systems. Filters ensure that dust, contaminants, and pollutants do not enter the data center during the cooling process.
What cooling components a facility has is one of the key factors you should consider during data center selection. The redundancy of these components (N+1, 2N, 2N+1) is also a key consideration in determining the tier of the data center.
How Does Data Center Cooling Work?
Data center cooling works by moving heat away from equipment and either dissipating it outside the data center or reconditioning it into cool air. Different cooling systems rely on different cooling methods:
- Air conditioning units (CRAC/CRAH) use chilled air to cool the facility. Cold air circulates to the racks, while hot air returns to the air handling units for cooling and recirculation.
- Liquid-based cooling systems use water or specialized coolants to cool server racks. The liquid absorbs heat from the equipment and carries it to a chiller or cooling tower for removal.
- Evaporative cooling systems cool incoming air by evaporating water. Hot air passes through a water-soaked medium, such as a wet pad; as the water evaporates, it cools the air, which then returns to the data center to lower the temperature.
- Free cooling systems use outside air to cool the facility. Leveraging the local climate reduces or eliminates the need for mechanical cooling.
What cooling strategy a data center relies on depends on the following factors:
- The facility's geographic location.
- The density of the IT equipment.
- Energy efficiency goals.
- Budget constraints.
- The specific operational requirements of the organization.
Once staff members set up a cooling system, they measure its efficiency with the power usage effectiveness (PUE) metric. PUE is the ratio of the facility's total power consumption to the power consumed by IT equipment alone; the closer the value is to 1.0, the less power goes to cooling and other overhead.
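Here is a short Python sketch of the PUE calculation. The formula (total facility power divided by IT equipment power) is the standard definition; the sample power figures are made up purely for illustration:

```python
# PUE = total facility power / IT equipment power.
# The sample figures below are illustrative, not measurements from a real site.

def power_usage_effectiveness(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Ratio of total facility power to IT power; 1.0 is the theoretical ideal."""
    return total_facility_kw / it_equipment_kw

it_load_kw = 1_000.0        # assumed IT equipment draw
cooling_kw = 400.0          # assumed cooling draw
other_overhead_kw = 100.0   # assumed lighting, UPS losses, etc.

pue = power_usage_effectiveness(it_load_kw + cooling_kw + other_overhead_kw, it_load_kw)
print(f"PUE = {pue:.2f}")   # 1.50 - every watt of IT work costs 1.5 watts overall
```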
Why Is Data Center Cooling Important?
Without sufficient cooling, data centers would be at constant risk of overheating, which could lead to equipment damage and costly disruptions. Let's examine the main reasons why data center cooling is so vital to facilities:
- No equipment overheating. Servers, storage units, and networking devices generate significant amounts of heat. Cooling systems ensure that equipment stays within safe temperatures to prevent performance degradation, data corruption, or component damage.
- Max system performance. Excessive heat slows down servers due to built-in mechanisms that throttle processing speeds in high-temperature environments. By regulating temperature, cooling systems ensure that equipment operates at maximum capacity.
- Less chance of downtime. Proper cooling keeps hardware temperatures stable and reduces the chances of equipment failures that lead to downtime.
- Prolonged equipment lifespan. Continuous exposure to high temperatures shortens the lifespan of electronic components. Cooling systems help extend the life of servers and networking gear, delaying costly replacements and reducing maintenance expenses.
- Lower humidity. Cooling systems control humidity levels to prevent static electricity buildup and condensation.
- Smaller environmental footprint. Efficient cooling reduces the overall carbon footprint of a data center.
- Better working conditions. Proper cooling helps maintain a safe and comfortable environment for human operators within the data center.
- Lower operational costs. Efficient cooling directly affects energy consumption. By optimizing the cooling process, data centers can reduce the power required to maintain stable conditions, which directly lowers energy costs.
Looking for tips on how to lower your IT costs? Check out our article on IT cost reduction for an in-depth look at 12 tried-and-tested strategies for lowering IT expenses.
Best Practices for Cooling a Data Center
Here are some best practices that help facility managers ensure all data center hardware runs at optimal temperatures:
- Implement hot/cold aisle containment. Arrange server racks in alternating rows so that fronts face fronts (forming cold aisles) and backs face backs (forming hot aisles). That way, you prevent hot and cold air from mixing, which improves airflow and limits hot spots.
- Use blanking panels. Install blanking panels in empty rack spaces to block the escape of cold air and prevent hot air from being drawn back into the server intake.
- Optimize airflow management. Use raised floors with perforated tiles to direct cold air to equipment intakes and ensure return ducts effectively remove hot air. Keep cables and other obstacles from blocking airflow pathways.
- Monitor and adjust temperature and humidity levels in real time. Use sensors and monitoring systems to track temperature and humidity continuously, and adjust cooling output before conditions drift out of range.
- Use energy-efficient cooling systems. Implement energy-efficient solutions like free cooling, liquid cooling, or evaporative cooling where possible. These methods reduce reliance on traditional air conditioning and lower operational costs.
- Maintain cooling equipment regularly. Perform routine maintenance for cooling units (CRAC/CRAH units, chillers, cooling towers, etc.) to ensure they're running efficiently. Address any potential issues before they have a chance to cause problems.
- Use variable speed fans. Instead of running fans at full capacity all the time, equip them with variable speed controls to adjust the airflow based on real-time cooling demand (see the sketch after this list).
- Seal leaks and openings. Ensure that all gaps, holes, and open spaces in floors, walls, and ceilings are sealed. This precaution prevents cool air from escaping and hot air from reentering.
- Adopt liquid cooling for high-density areas. Use liquid cooling solutions for high-density equipment that generates more heat than air-based systems can handle efficiently.
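To make the variable-speed fan practice above more concrete, here is a hedged sketch of a simple proportional control loop in Python. The setpoint, fan-speed limits, and gain are illustrative assumptions; production systems typically use tuned PID loops inside the building management system:

```python
# Illustrative proportional fan control: raise fan speed as intake temperature
# climbs above the setpoint. Setpoint, gain, and speed limits are assumptions.

SETPOINT_C = 24.0       # target intake temperature
MIN_SPEED_PCT = 30.0    # keep fans spinning to maintain baseline airflow
MAX_SPEED_PCT = 100.0
GAIN_PCT_PER_C = 15.0   # how aggressively speed reacts to each degree of error

def fan_speed_pct(intake_temp_c: float) -> float:
    """Return a fan speed (%) proportional to how far intake temp exceeds the setpoint."""
    error = intake_temp_c - SETPOINT_C
    speed = MIN_SPEED_PCT + GAIN_PCT_PER_C * max(error, 0.0)
    return min(MAX_SPEED_PCT, speed)

for temp in (22.0, 25.0, 27.0, 30.0):
    print(f"{temp:.0f}°C intake -> {fan_speed_pct(temp):.0f}% fan speed")
# 22°C -> 30%, 25°C -> 45%, 27°C -> 75%, 30°C -> 100%
```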
How you plan to cool the facility is a key factor during data center site selection. The local climate, water availability, and energy costs directly impact your choices regarding the facility's cooling systems.
Challenges of Data Center Cooling
Below is a close look at the most common challenges data center owners face when setting up and managing cooling systems:
- High energy consumption. Cooling systems account for a large portion of a data center's energy use. Many facility managers struggle to maintain cost-effectiveness when cooling large or high-density data centers.
- Density-related problems. Managing heat becomes more difficult as data centers pack more equipment into smaller spaces. Many data center owners struggle to adjust cooling systems as the staff deploys more hardware and creates new hot spots.
- The efficiency-performance balance. Data center admins must strike a balance between energy-efficient cooling and maintaining optimal performance levels for IT equipment. This balancing is often challenging when facilities have fluctuating workloads.
- Geographical and climate constraints. Achieving energy-efficient cooling is harder and can lead to higher operational costs in regions with hot or humid climates.
- Scalability issues. Ensuring cooling systems scale efficiently without becoming a bottleneck is a constant challenge as data centers grow in capacity.
- High upfront costs. The initial investment for advanced cooling systems can be substantial. Hybrid systems are especially expensive as the organization must invest in two or more different yet completely functional cooling setups.
- Integration complexity. Many organizations face difficulties when integrating cooling systems with existing data center infrastructure, especially when retrofitting older facilities.
- Day-to-day maintenance issues. Cooling equipment such as chillers and cooling towers requires 24/7 monitoring and maintenance. Managing this equipment without disrupting data center operations is a common challenge.
Cooling systems consume a large portion of a data center's total energy, sometimes up to 40% of its total power usage. This heavy demand calls for smart and forward-thinking data center power designs.
Data Center Cooling FAQ
Below are answers to some of the most frequently asked questions concerning data center cooling.
How Much Does Data Center Cooling Cost?
The cost of data center cooling depends on the data center's size, location, cooling methods, and energy efficiency efforts. Yearly costs for enterprise-level cooling systems can easily reach hundreds of thousands of dollars, while hyperscale data centers can incur annual cooling costs in the millions.
How Cold Is Too Cold for a Data Center?
ASHRAE recommends data centers keep their temperature levels within the 64°F to 81°F (18°C to 27°C) range. Going below 64°F can lead to inefficient energy use and potential condensation issues.
How Much Water Is Used to Cool Data Centers?
The exact water usage depends on the cooling system. Large data centers can use millions of gallons per year if they rely on water-based cooling methods.
What Gas Is Used to Cool Data Centers?
Most air conditioning systems used for data center cooling rely on refrigerant gases like R-134a or R-410A.
What Is the Future of Cooling Data Centers?
The data center cooling industry is currently focused primarily on increasing energy efficiency and sustainability. Here's what you should expect to see in the next few years:
- Direct-to-chip and immersion cooling will grow in popularity in the coming years as servers become more powerful and heat-dense.
- There's an evident push toward free cooling methods that use outside air or water. The goal is to reduce reliance on traditional air conditioning to improve cost-effectiveness and shrink environmental footprints.
- Expect more facilities to adopt AI technologies that optimize temperatures in real time without human involvement.
- Data center owners will continue to experiment with renewable energy sources to lower costs and meet the growing demands of modern data centers.
- We will see more facilities adopt modular cooling systems the staff can easily add or remove without significant infrastructure overhauls.
- Data centers will increasingly look for ways to reuse waste heat for other applications. The most prominent ideas are heating nearby buildings or producing hot water in urban environments.
- Expect more facilities to experiment with phase change materials (PCMs). These materials absorb and release thermal energy during phase transitions, which allows for more efficient heat management and temperature regulation.
The push for greater sustainability is a noticeable trend in the data center industry. Learn more about what efforts are being made in the article on data center sustainability written by pNAP's CEO and founder Ron Cadwell.
Cooling Is the Backbone of Every Data Center
Effective cooling is a critical component of data center operations that directly influences performance, reliability, and energy efficiency. Now that you know what cooling methods exist and what they entail, you have all the information you need to make an informed decision on how best to cool your facility.