Improving Cooling Systems Efficiency

Of all the factors that can impact energy efficiency, cooling represents the majority of facility-related energy usage in the data center, outside of the actual IT load itself. This article is part of the DCK Executive Guide to Energy Efficiency and the fifth article in a five-part executive education series on Energy Efficiency.

While there are several variations on cooling systems, they generally fall into two categories: the Computer Room Air Conditioner (“CRAC”), in which each unit has its own internal compressor, and the Computer Room Air Handler (“CRAH”), which is primarily a coil and a fan and requires an external supply of chilled water. From an energy efficiency viewpoint, the CRAH, which is usually served by a water-cooled central chilled water plant, is more efficient than an air-cooled CRAC unit. However, air-cooled CRAC units have one advantage over a centralized chiller system: they are autonomous and therefore offer inherent redundancy and fault tolerance, in that there is no single point of failure (other than a power failure).
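
As a rough illustration of why the chilled-water CRAH approach usually wins on efficiency, the short Python sketch below compares the electrical power each architecture needs to reject the same IT heat load. The coefficient-of-performance (COP) figures are assumptions chosen only for illustration, not measured values.

```python
# Rough comparison of the cooling energy draw for the two architectures.
# The COP (coefficient of performance) values are illustrative assumptions
# only; real figures depend on climate, equipment vintage, and part load.

IT_LOAD_KW = 500.0  # heat load to be removed (roughly equal to IT power draw)

ASSUMED_COP = {
    "air-cooled CRAC (DX)": 2.8,         # assumed: each unit has its own compressor
    "CRAH + water-cooled chiller": 5.0,  # assumed: central chilled-water plant
}

for system, cop in ASSUMED_COP.items():
    cooling_kw = IT_LOAD_KW / cop  # electrical power drawn by the cooling system
    print(f"{system}: ~{cooling_kw:.0f} kW to reject {IT_LOAD_KW:.0f} kW of IT heat (COP {cop})")
```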

Regardless of the type of cooling system, the amount of cooling required, and therefore the energy consumed, is reduced if data center temperatures can be raised. Tightly controlled humidity is another area where a great deal of energy is used, in many cases quite needlessly.

So what does this mean for the data center facility and its cooling system design and operation? Data centers have historically kept very tight environmental conditions to help ensure the reliability of the IT equipment. This was originally driven by older equipment's susceptibility to temperature and humidity changes, as well as the very narrow range of “recommended” environmental conditions mandated by the equipment manufacturers themselves. (Download the complete DCK Executive Guide to Energy Efficiency for more details on the ASHRAE Expanded Thermal Guidelines).

In 2011, ASHRAE, with the consensus of major IT equipment manufacturers, sought to radically change the direction of the data center industry’s view of cooling requirements by openly stating: “A roadmap has been outlined to facilitate a significant increase in the operational hours during which economizer systems are able to be used, and to increase the opportunity for data centers to become “chillerless,” eliminating mechanical cooling systems entirely, in order to realize improved Power Usage Effectiveness (PUE).”
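
Because the ASHRAE roadmap is framed in terms of PUE, a quick worked example is useful. PUE is total facility energy divided by IT equipment energy, so every kilowatt of cooling that economizer hours displace shows up directly in the ratio; the load figures in the sketch below are hypothetical.

```python
# PUE = total facility energy / IT equipment energy.
# All load figures below are hypothetical and for illustration only.

def pue(it_kw, cooling_kw, power_losses_kw, lighting_misc_kw):
    """Power Usage Effectiveness for a snapshot of facility loads (kW)."""
    return (it_kw + cooling_kw + power_losses_kw + lighting_misc_kw) / it_kw

IT_KW = 1000.0

# Mechanical cooling running year-round (assumed loads):
print(f"Mechanical cooling:  PUE = {pue(IT_KW, 450.0, 80.0, 30.0):.2f}")

# Extensive economizer ('free cooling') hours (assumed loads):
print(f"Economizer-assisted: PUE = {pue(IT_KW, 150.0, 80.0, 30.0):.2f}")
```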

The 2011 version of ASHRAE’s guidelines openly endorsed “free cooling.” This would have been considered heresy by many only a few years ago, and some are still in shock and have difficulty accepting this new outlook toward less tightly controlled environmental conditions in the data center.

The opportunity to save significant amounts of cooling energy by moderating cooling requirements and expanding the use of “free cooling” is enormous. However, due to the highly conservative and risk-averse nature of the industry, this will take a while to become widespread, common practice. Some operators have begun to slowly explore raising temperatures a few degrees to gather experience and to see whether they encounter any operational issues with the IT equipment. Ultimately, it is a question of whether the energy (and cost) saved is worth the risk (perceived or real) of potential equipment failures due to higher temperatures (and perhaps wider humidity ranges).

There are clearly some legitimate reasons to keep temperatures lower. The first is the concern over loss of thermal ride-through time in the event of a brief loss of cooling. This is especially true for higher-density cabinets, where an event of only a few minutes could cause unacceptably high IT intake temperatures. This can occur during the loss of utility power and the subsequent transfer to the back-up generator, which, while it typically takes 30 seconds or less, will cause most compressors in chillers or CRAC units to recycle and remain off for 5–10 minutes or more. While there are some ways to minimize or mitigate this risk, it is a valid concern.
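
To see why ride-through time is a genuine constraint at higher densities, a back-of-the-envelope sensible-heat estimate is enough. The sketch below assumes that only the room air absorbs the IT heat while cooling is off, ignoring the thermal mass of the equipment, structure, and any chilled-water storage (which in practice buys extra time), so the results are illustrative rather than predictive.

```python
# Back-of-the-envelope thermal ride-through: how long until the room air
# warms by a given amount if cooling stops while the IT load keeps running?
# Only the room air is modeled; equipment and structural thermal mass,
# and any chilled-water buffer, are ignored (a pessimistic simplification).

AIR_DENSITY = 1.2   # kg/m^3, approximate at data center conditions
AIR_CP = 1.006      # kJ/(kg*K), specific heat of air

def ride_through_seconds(room_volume_m3, it_load_kw, allowed_rise_k):
    """Seconds until the bulk room air temperature rises by allowed_rise_k."""
    heat_capacity_kj_per_k = AIR_DENSITY * room_volume_m3 * AIR_CP
    return heat_capacity_kj_per_k * allowed_rise_k / it_load_kw  # kJ / (kJ/s) = s

# Hypothetical 1,000 m^3 room allowed to rise 5 K (9 F):
for load_kw in (100.0, 300.0, 600.0):
    minutes = ride_through_seconds(1000.0, load_kw, 5.0) / 60.0
    print(f"{load_kw:.0f} kW IT load: ~{minutes:.1f} min before a 5 K rise")
```

Set against compressor restart delays of 5–10 minutes, even these simplified numbers show why the concern is legitimate for dense rooms.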

The other common concern is the wide variation in IT equipment intake temperatures that occurs in most data centers due to airflow mixing and bypass air resulting from less-than-ideal airflow management. Most sites resort to over-cooling the supply air so that the worst spots in higher-density areas (typically the ends of aisles and the tops of racks) do not overheat from warm air re-circulated from the hot aisles.

However, if better airflow management is implemented to minimize hot spots, intake temperatures can be slowly raised beyond the conservative 68–70°F. This can be accomplished by a variety of means, such as spreading out and balancing rack-level heat loads, adjusting airflow to match the heat load, and better segregating hot and cold air through blanking panels in the racks and the use of containment systems. If done properly, it is likely that within one to two years 75–77°F in the cold aisle would no longer be cause for alarm among IT users. The key is to improve communication and educate both IT and facilities management about the importance of air management and the opportunity for energy savings, without reducing equipment reliability.
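
As a sketch of the arithmetic behind this, the supply (cold aisle) setpoint only needs to sit far enough below the worst-case allowable intake temperature to absorb whatever recirculation rise remains after airflow management, and each degree recovered can be translated into cooling energy with a per-degree savings factor. The recirculation-rise figures and the savings factor below are assumptions to be replaced with measured site data.

```python
# Sketch: how better airflow management converts into a higher supply setpoint,
# and roughly what that setpoint gain is worth in cooling energy.
# The recirculation-rise values and savings-per-degree factor are placeholders;
# substitute measured data for a real assessment.

MAX_INTAKE_F = 80.6  # ASHRAE recommended upper limit for IT intake (27 C)

def required_supply_f(worst_recirculation_rise_f):
    """Supply setpoint needed to keep the worst rack inlet at or below the limit."""
    return MAX_INTAKE_F - worst_recirculation_rise_f

before_f = required_supply_f(12.0)  # poor air management (assumed rise)
after_f = required_supply_f(4.0)    # blanking panels + containment (assumed rise)

SAVINGS_PER_DEG_F = 0.02  # assumed fractional cooling-energy savings per degF raised

gain_f = after_f - before_f
print(f"Supply setpoint can rise from ~{before_f:.0f} F to ~{after_f:.0f} F")
print(f"Rough cooling-energy savings: ~{gain_f * SAVINGS_PER_DEG_F:.0%}")
```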

For the complete series on data center energy efficiency, download the Data Center Knowledge Executive Guide on Data Center Energy Efficiency in PDF format, compliments of Digital Realty.
