The Ten Most Common Cooling Mistakes Data Center Operators Make
Air conditioning units at one of the facilities by European data center provider Interxion. (Photo: Interxion)

Two data center cooling experts list things you’re probably doing wrong and paying for in wasted energy.

While data center operators are generally a lot better at cooling management than they were ten years ago, many facilities still face issues that prevent them from using their full capacity or cause them to waste energy.

Lars Strong, senior engineer at Upsite Technologies, a data center cooling specialist, says the ultimate goal of airflow management is tighter control of cooling temperature set points at IT air intakes while minimizing the volume of air delivered to the data hall.
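To see why air volume matters as much as temperature, here is a back-of-the-envelope sensible-heat calculation. The load and temperature-rise figures are illustrative assumptions, not numbers from Strong:

```python
# Back-of-the-envelope airflow check (illustrative values, not from the article).
# Sensible heat removed by air: Q [kW] ~= 1.21 * airflow [m^3/s] * delta_T [C],
# where 1.21 kJ/(m^3*K) is roughly the volumetric heat capacity of air near sea level.

def required_airflow_m3s(it_load_kw: float, delta_t_c: float) -> float:
    """Airflow needed to carry away it_load_kw with a delta_t_c rise across the IT gear."""
    return it_load_kw / (1.21 * delta_t_c)

if __name__ == "__main__":
    load_kw = 100.0                      # assumed IT load for one data hall
    for dt in (8.0, 11.0, 14.0):         # candidate intake-to-exhaust temperature rises
        flow = required_airflow_m3s(load_kw, dt)
        print(f"deltaT {dt:4.1f} C -> {flow:5.2f} m^3/s ({flow * 2118.88:6.0f} CFM)")
```

The wider the intake-to-exhaust temperature rise the equipment can tolerate, the less air has to be moved to carry the same heat load.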

We asked Strong and Wally Phelps, director of engineering at AdaptivCool, another company that specializes in thermal management in data centers, to list some of the most common issues they see in data centers they visit. Here is what they said:

1. Phantom leakage: This is leakage of cold air from the plenum under the raised floor into adjacent spaces or into support columns. Phelps says such breaches are fairly common and cause everything from loss of pressure in the IT environment to infiltration of warm, dusty, or humid air from elsewhere. The only way to avoid this problem is to go under the raised floor, inspect the perimeter and the support columns, and seal any holes you find.

2. Too many perforated tiles: There is no reason to have perforated tiles in hot aisles or in open areas of the white space; it is a waste of cooling capacity. It is also possible to have too many perforated tiles on the intake side of the racks. One red flag is lower-than-normal air temperature at the top of IT racks, Phelps said.
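One way to sanity-check a tile count is to compare the airflow the tiles deliver with what the racks actually need. The sketch below uses assumed values for aisle load, design temperature rise, and per-tile airflow, since the real numbers depend on tile type and underfloor pressure:

```python
# Rough sanity check on perforated tile count for one cold aisle
# (load, deltaT, and per-tile airflow are assumptions, not from the article).
import math

def cfm_per_kw(delta_t_f: float) -> float:
    """CFM needed per kW of IT load: Q[BTU/hr] = 1.08 * CFM * deltaT[F], and 1 kW = 3412 BTU/hr."""
    return 3412.0 / (1.08 * delta_t_f)

def tiles_needed(aisle_load_kw: float, delta_t_f: float = 20.0, cfm_per_tile: float = 500.0) -> int:
    """Tiles required to serve the aisle, assuming ~500 CFM delivered per 25%-open tile."""
    return math.ceil(aisle_load_kw * cfm_per_kw(delta_t_f) / cfm_per_tile)

# Ten 5 kW racks on each side of the aisle -> 100 kW total (assumed)
print(tiles_needed(aisle_load_kw=100.0))   # ~32 tiles under these assumptions
```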

3. Unsealed raised-floor openings: While many data center operators have made an effort to seal cable openings and other holes in their raised floors, very few have actually finished the job, Strong says. The holes that remain can let a lot of cold air escape into areas where it is not needed. One particularly important place to look for unsealed openings is under electrical gear, such as power distribution units or remote power panels.

4. Poor rack sealing: Installing blanking panels in empty rack spaces is about as common-sense as airflow management gets, yet not everybody does it. Some cabinets are not designed with seals between the mounting rails and the sides of the cabinet. An operator who cares about efficiency will seal those openings, as well as any openings under the cabinet, Strong says.

5. Poorly calibrated temperature and humidity sensors: Sometimes vendors ship uncalibrated sensors, and sometimes calibration drifts over time. This leads to poorly coordinated cooling units working against each other. Strong recommends that operators check both temperature and relative-humidity sensors for calibration every six months and adjust them if necessary.
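A simple drift check against a recently calibrated reference probe can flag sensors that need attention. The sensor names, readings, and tolerances below are hypothetical; the six-month interval is Strong's recommendation:

```python
# Minimal sketch of a calibration drift check (hypothetical sensors and tolerances).

REFERENCE = {"temp_c": 24.0, "rh_pct": 45.0}   # reading from a freshly calibrated reference probe
TOLERANCE = {"temp_c": 0.5, "rh_pct": 3.0}     # assumed acceptable drift before recalibration

sensors = {                                    # hypothetical readings taken next to the reference
    "crac-01-return": {"temp_c": 24.2, "rh_pct": 44.0},
    "crac-02-return": {"temp_c": 25.1, "rh_pct": 51.5},
    "crac-03-return": {"temp_c": 23.8, "rh_pct": 46.0},
}

for name, reading in sensors.items():
    drifted = [k for k in REFERENCE if abs(reading[k] - REFERENCE[k]) > TOLERANCE[k]]
    status = "recalibrate: " + ", ".join(drifted) if drifted else "within tolerance"
    print(f"{name}: {status}")
```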

6. CRACs fighting for humidity control: Another good way to pit two CRACs against each other is to return air at different temperatures to adjacent units. As a result, the CRACs get different humidity readings, and one ends up humidifying the air while the other dehumidifies it. Fixing this problem takes some finesse in reading the psychrometric chart and setting humidity control points thoughtfully, Phelps says.
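A simplified psychrometric example shows how the fight starts: two CRACs see the same absolute moisture but different return temperatures, so their relative-humidity readings diverge. The temperatures and dead band below are assumptions for illustration:

```python
# Why adjacent CRACs can disagree about humidity: same moisture content, different
# return temperatures (a simplified sketch; temperatures are illustrative and the
# Magnus approximation is used for saturation vapor pressure).
import math

def sat_vapor_pressure_hpa(t_c: float) -> float:
    """Magnus approximation for saturation vapor pressure over water."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity(dry_bulb_c: float, dew_point_c: float) -> float:
    """RH (%) implied by a given dry-bulb temperature and dew point."""
    return 100.0 * sat_vapor_pressure_hpa(dew_point_c) / sat_vapor_pressure_hpa(dry_bulb_c)

dew_point_c = 12.0                            # same absolute moisture throughout the room
for crac, return_temp_c in (("CRAC A", 27.0), ("CRAC B", 33.0)):
    rh = relative_humidity(return_temp_c, dew_point_c)
    print(f"{crac}: return air {return_temp_c} C -> reads {rh:.0f}% RH")

# With an assumed RH dead band of 35-50%, CRAC B (about 28% RH) starts humidifying; the added
# moisture then pushes CRAC A (about 39% RH) over its upper limit and it starts dehumidifying.
```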

7. Less is more: Many data center operators overdo it with cooling capacity. If there is more cooling than needed and no safe way to keep redundant CRACs off, the entire cooling scheme is compromised, since too many units run in their low-efficiency range. This often happens when the underfloor air temperature is high and certain racks are hard to keep cool; a typical response is to bring more cooling units online. While counterintuitive, the correct response is to run fewer CRACs at higher load, Phelps says.
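As a rough illustration of the fewer-CRACs-at-higher-load point, the sketch below estimates how many units need to run for an assumed load, unit capacity, and N+1 redundancy policy:

```python
# Illustrative check of how many CRAC units actually need to run
# (capacities, loads, target load fraction, and N+1 policy are assumptions, not from the article).
import math

def cracs_to_run(it_load_kw: float, unit_capacity_kw: float,
                 target_load_fraction: float = 0.8, redundancy: int = 1) -> int:
    """Units to keep on so each runs near target_load_fraction, plus redundant spares."""
    return math.ceil(it_load_kw / (unit_capacity_kw * target_load_fraction)) + redundancy

it_load_kw = 400.0     # IT heat load in the room (assumed)
unit_cap_kw = 100.0    # sensible capacity per CRAC (assumed)

running = cracs_to_run(it_load_kw, unit_cap_kw)
per_unit = it_load_kw / running
print(f"Run {running} units; each carries ~{per_unit:.0f} kW (~{100 * per_unit / unit_cap_kw:.0f}% load).")
# If all 8 installed units (assumed fleet size) were left on, each would sit near 50% load --
# the low-efficiency state described above.
```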

8. Empty cabinet spaces: This is another obvious issue that for some reason not everyone addresses. When one or more cabinet spaces are left empty, the airflow balance gets skewed, leading to recirculation of exhaust air into the cold aisle or loss of cool air from the cold aisle, Strong says. The condition naturally leads to a cooling scheme that overcools and supplies more air than is really necessary in order to compensate for the losses.

9. Poor rack layout: Ideally, you want to place racks in long hot-aisle/cold-aisle rows, with the main CRACs at the ends of the rows, Phelps says. A small island of racks with no particular orientation does not help anybody. Neither does orienting racks front to back or placing CRACs in the same direction as the IT rows.

10. Not giving cooling management the respect it deserves: As Strong puts it, failing to consider the benefits of improving the way you manage cooling leaves an operator with stranded capacity and higher operating cost. Benefits from a simple thing like installing blanking panels can cascade, but they are often overlooked. In extreme cases, a well-managed data center cooling system can even defer an expansion or a new build.
