
Approaches to Data Center Containment

The data center is fraught with power and cooling challenges. For every 50 kW of power the data center feeds to an aisle, the same facilities typically apply 100-150 kW of cooling to maintain desirable equipment inlet temperatures. Most legacy data centers waste more than 60% of that cooling energy in the form of bypass air.
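These figures can be sanity-checked with a quick calculation. The sketch below uses only the article's own numbers (a 50 kW aisle, 100-150 kW of cooling, 60 percent bypass loss); the choice of the range's midpoint is an assumption for illustration:

```python
# Sketch of the cooling-overhead arithmetic described above.
# Article figures: a 50 kW aisle typically receives 100-150 kW of
# cooling, and over 60% of that cooling energy is lost as bypass air.

it_load_kw = 50.0        # power fed to the aisle
cooling_kw = 125.0       # assumed midpoint of the 100-150 kW range
bypass_fraction = 0.60   # share of cooling lost as bypass air

cooling_ratio = cooling_kw / it_load_kw
wasted_kw = cooling_kw * bypass_fraction
effective_kw = cooling_kw - wasted_kw

print(f"cooling-to-power ratio: {cooling_ratio:.1f}:1")        # 2.5:1
print(f"cooling wasted as bypass air: {wasted_kw:.0f} kW")     # 75 kW
print(f"cooling reaching IT intakes: {effective_kw:.0f} kW")   # 50 kW
```

At the midpoint, only about 50 kW of the 125 kW of cooling actually reaches the equipment intakes, which is why legacy facilities must grossly oversize cooling relative to IT load.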

This is the second article in a best-practices series on improving data center energy efficiency through effective airflow management strategies.

Legacy data centers employ a hot aisle/cold aisle arrangement of the IT racks. The fronts of the racks face each other and draw cold air into the rack to cool rack-mounted IT devices (e.g., servers and switches). Conversely, the rear sides of the rows of racks face one another, expelling the hot air into the hot aisle. The issue with hot aisle/cold aisle designs is that the air is free to move wherever it will, so hot and cold air mix.

In a cold aisle containment approach, the data center installs end-of-row doors, aisle ceilings, or overhead vertical wall systems to contain the conditioned air that cooling systems send into the cold aisles. This ensures that only that air flows into the air intakes of the rack-mounted IT devices. The data center contains the cold aisle to keep the cold air in and the hot air out.

In hot aisle containment, the hot aisle is contained so that the precision air conditioning units only receive hot air from the aisles. Again, the data center contains the hot aisle to keep the hot air in that aisle and the cold air out (see Figure Three). For more history of data center containment, download the complete DCK Guide to Data Center Containment.


Why Containment?

Data centers need more effective airflow management solutions as equipment power densities increase in the racks. Five years ago, the average rack power density was one to two kW per rack. Today, the average power density is four to eight kW per rack and some data centers that run high density applications are averaging 10 to 20 kW per rack.

The cost of electricity is rising in line with increasing densities. “The cost of electricity is about US$0.12/kWh for large users. The forecast is for a greater than 15-percent rise in cost per year over the next five years,” says Ian Bitterlin, Chief Technology Officer, ARK Continuity.
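Compounding that forecast shows how quickly the quoted price escalates. The sketch below uses the $0.12/kWh figure and the 15 percent lower bound from the quote above; the five-year horizon is the one Bitterlin cites:

```python
# Projecting the quoted electricity price under the quoted growth rate.
rate_today = 0.12      # US$/kWh for large users (quoted above)
annual_growth = 0.15   # 15% per year, the forecast's lower bound
years = 5

rate_in_5_years = rate_today * (1 + annual_growth) ** years
print(f"projected price: ${rate_in_5_years:.3f}/kWh")  # ~$0.241/kWh
```

Even at the lower bound, the price roughly doubles over five years, which is the economic pressure behind the containment argument that follows.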

Containment makes existing cooling and power infrastructure more effective. With containment, the data center makes more efficient use of the same or less cooling, reducing the cooling portion of the total energy bill. Data centers can even power down some CRAC units, saving utility and maintenance costs. Containment allows for lower cooling unit fan speeds, higher chilled water temperatures, decommissioning of redundant cooling units and increased use of free cooling. A robust containment solution can reduce fan energy consumption by up to 25 percent and deliver 20 percent energy savings at the chilled water plant, according to the U.S. EPA.
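The fan-energy figure is less surprising than it looks because of the fan affinity laws (general HVAC background, not stated in the article): fan power scales roughly with the cube of fan speed, so a small speed reduction yields a disproportionate energy saving. A sketch, treating the cubic law as an idealization:

```python
# Fan affinity law: power ~ speed^3 (an idealization; real
# VFD-driven fans deviate somewhat). This finds the speed cut that
# corresponds to the EPA's quoted 25% fan-energy reduction.

target_power_fraction = 0.75                    # 25% energy saving
speed_fraction = target_power_fraction ** (1 / 3)
print(f"required fan speed: {speed_fraction:.1%} of full speed")
# Cutting fan speed by roughly 9% cuts fan power by about 25%.
```

In other words, containment does not need to slow the fans dramatically; a single-digit speed reduction, made safe by eliminating bypass air, is enough to capture the quoted saving.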

Containment makes running racks at high densities more affordable so that data centers can add new IT equipment such as blade servers. Data center containment brings the ratio of power consumed to cooling applied down to nearly 1:1 in kW. It can save a data center approximately 30 percent of its annual utility bill (lower OpEx) without additional CapEx.
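The roughly 30 percent figure is consistent with the article's own numbers, as a back-of-the-envelope comparison shows. The sketch below reuses the 50 kW aisle and the $0.12/kWh price quoted earlier, and assumes cooling drops from a legacy 2:1 cooling-to-power ratio to the 1:1 ratio containment enables, ignoring lighting, UPS losses, and other overheads:

```python
# Rough annual utility-bill comparison for one 50 kW aisle,
# before and after containment.
it_kw = 50.0
price = 0.12    # US$/kWh, quoted earlier in the article
hours = 8760    # hours per year

before_total_kw = it_kw + 2.0 * it_kw  # legacy: ~2:1 cooling-to-power
after_total_kw = it_kw + 1.0 * it_kw   # contained: ~1:1

bill_before = before_total_kw * hours * price
bill_after = after_total_kw * hours * price
saving = 1 - bill_after / bill_before

print(f"annual bill before: ${bill_before:,.0f}")  # $157,680
print(f"annual bill after:  ${bill_after:,.0f}")   # $105,120
print(f"saving: {saving:.0%}")                     # 33%
```

A roughly one-third reduction on these assumptions lines up with the article's claim of approximately 30 percent savings on the annual utility bill.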

If you would like to read the entire series in PDF format, you can click here to download the complete Data Center Knowledge Guide to Data Center Containment, courtesy of Eaton.
