
Modeling Data Centers Without Mechanical Cooling

The data center industry is seeking to create a data center model that requires little to no mechanical cooling, writes Daniel Kennedy of Tate. He points out that ASHRAE’s 2011 Thermal Guidelines for Data Processing Environments drastically expanded the allowable temperature range for data center operation, prompting more discussion of how to reduce or eliminate mechanical cooling.

Daniel Kennedy is the senior sales engineer at Tate, Inc., a manufacturer of raised floors and airflow management solutions.


A trip to your next data center trade show or conference will bring into sharp focus the industry’s desire to create a data center model that requires little to no mechanical cooling. ASHRAE’s 2011 Thermal Guidelines for Data Processing Environments drastically expanded the allowable temperature range for data center operation. In particular, the A3 and A4 classifications address scenarios in which the data center operator and design team are planning a facility with no mechanical cooling system at all, relying solely on air-side economization.
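
For orientation, here is a minimal sketch of those expanded allowable inlet ranges in Python; the numeric bounds are assumed from the 2011 ASHRAE TC 9.9 tables and should be verified against the published guidelines:

    # Approximate 2011 ASHRAE allowable dry-bulb inlet ranges, in degrees C
    # (assumed values for illustration; confirm against the published TC 9.9 tables).
    ALLOWABLE_RANGES_C = {
        "A1": (15, 32),
        "A2": (10, 35),
        "A3": (5, 40),
        "A4": (5, 45),
    }

    def within_allowable(classification, inlet_temp_c):
        """Return True if an IT inlet temperature falls inside the class's allowable range."""
        low, high = ALLOWABLE_RANGES_C[classification]
        return low <= inlet_temp_c <= high

    print(within_allowable("A3", 38))  # True: 38C (100F) is allowable for class A3
    print(within_allowable("A1", 38))  # False: beyond the A1 allowable maximum of 32C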

The Benefits

The lack of capital outlay for expensive chilled-water systems or air-cooled CRAC units dramatically reduces the initial cost of building a new data center. The continued operational cost of the space is also much lower, because an air-side economized cooling system uses far less energy than a large mechanical plant, even one with the most efficient design.

Good arguments also suggest that an air-side economized data center could prove more reliable and better able to handle the variable loads introduced by newer IT equipment. The improvement in reliability comes from the instantaneous startup capability of an air-side economization system, which can provide immediate, continuous cooling during a power failure; larger chilled-water plants typically cannot do so during the switchover to generator power, due to compressor timeouts and other factors. The ability to handle variable load profiles also improves, since the high levels of step loading or load shedding that can wreak havoc on mechanical cooling systems are simply not an issue for an air-side economization system.

The Other Side of the Coin

The biggest drawback that comes up in conversation is the people issue. If a data center is to operate in an A3 or A4 classification environment, “cold” aisle temperatures could reach 113F (45C). Depending on the delta T of the IT equipment, which could exceed 50F (28C), hot aisle temperatures could then exceed 163F (73C). These aisle temperatures are clearly extremes, but they fall inside the allowable classifications, and with the recent heat waves throughout the U.S. this summer, such hot aisle temperatures are quite possible, even in temperate climates. To operate in these environments, standard “cold”/“very hot” aisle arrangements will not work, even with physical aisle containment.
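
To make the arithmetic concrete, here is a quick sketch of the hot-aisle estimate above; the 113F inlet and 50F delta T are the example figures from this discussion, not measured data:

    def hot_aisle_temp_f(cold_aisle_f, delta_t_f):
        """Estimate hot-aisle exhaust temperature as inlet temperature plus IT delta T."""
        return cold_aisle_f + delta_t_f

    # A4 allowable maximum inlet of 113F (45C) with a 50F (28C) equipment delta T
    print(hot_aisle_temp_f(113, 50))  # 163F, roughly 73C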

Planning for Extremes

A strategy that delivers outside air to the IT equipment through a dedicated supply plenum, ideally below the raised floor, and returns exhaust air through a separate plenum in the form of a drop ceiling would protect the data center operator and anyone else working in the space from these extreme environments. The remainder of the data center could then be comfort cooled like any other building space, keeping people comfortable while the IT equipment is cooled efficiently.

