ASHRAE: Warmer Data Centers Good for Some, Not All
October 5th, 2012 By: Rich Miller
EATONTOWN, N.J. – Don Beaty has built some of the world’s most efficient data centers. Between 2004 and 2011, his firm, DLB Associates, helped design eight of Google’s largest data centers, including facilities that slashed cooling costs by operating their IT equipment at warmer temperatures, sometimes up to 80 degrees F.
During the same period, Beaty has also been responsible for crafting recommendations on data center cooling for the leading industry group for heating and cooling professionals. Those dual roles have provided Beaty with a unique vantage point on the evolution of new strategies to cool servers – implementing cutting-edge techniques for the industry’s leading innovator as his “day job,” while working to develop standards and recommendations that can work for a broad spectrum of data center operators.
In both roles, Beaty has grown accustomed to managing the heat. This week marks the release of the latest guidelines on data center cooling from ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers), which reflect the growing momentum for operating servers at higher levels of temperature and humidity. “Thermal Guidelines for Data Processing Environments” is published by ASHRAE’s Technical Committee (TC) 9.9, which was co-founded by Beaty and IBM’s Roger Schmidt to provide specialized guidance on data center cooling.
Different Needs for Different Users
The new ASHRAE publication recognizes the growing gap between the Googles of the world and the many corporate data centers housing a more diverse range of hardware and applications, outlining four new classes of data centers, each with two tiers of temperature management ranges (“recommended” and “allowable”). Significantly, it also provides data on how changes in temperature impact hardware reliability.
“The most valuable update to this edition is the inclusion of IT equipment failure rate estimates based on inlet air temperature,” Beaty said. “These server failure rates are the result of the major IT original equipment manufacturers (OEM) evaluating field data, such as warranty returns, as well as component reliability data. This data will allow data center operators to weigh the potential reliability consequences of operating in various environmental conditions vs. the cost and energy consequences.”
Raising the baseline temperature inside the data center can save money by reducing the amount of energy used for air conditioning, and can allow expanded use of free cooling (the use of fresh air instead of air conditioners to cool servers). But many data center operators have resisted this strategy, fearing that higher temperatures would lead to more hardware failures for expensive servers and storage gear.
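The free-cooling benefit is easy to sketch numerically: the more hours per year the outdoor air is at or below the supply set point, the more hours air conditioning can stay off. The sketch below uses an entirely synthetic year of hourly temperatures (a hypothetical temperate climate, not data from the article) just to show how raising the set point expands the free-cooling window:

```python
import math

def free_cooling_hours(hourly_temps_f, set_point_f):
    """Count the hours in which outside air alone could cool the room,
    i.e. the outdoor temperature is at or below the supply set point."""
    return sum(1 for t in hourly_temps_f if t <= set_point_f)

# Synthetic year of hourly temperatures for a hypothetical temperate climate:
# an annual sinusoid (mean 55 F, +/- 20 F) plus a daily swing of +/- 8 F.
temps = [
    55 + 20 * math.sin(2 * math.pi * h / 8760)
       + 8 * math.sin(2 * math.pi * h / 24)
    for h in range(8760)
]

for sp in (72, 75, 80):
    hrs = free_cooling_hours(temps, sp)
    print(f"set point {sp} F: {hrs} free-cooling hours ({hrs / 87.60:.0f}% of year)")
```

With this made-up climate, each few degrees of set-point increase adds hundreds of hours in which the chillers could idle; real savings depend on the actual site's weather data.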
Servers Can Take the Heat
That’s why most data centers operate in a temperature range between 68 and 72 degrees Fahrenheit, and some are as cold as 55 degrees. But it turns out that servers are tougher than widely believed, and can perform effectively at higher temperatures – a fact that was understood by many server manufacturers, and has been documented by a series of studies in recent years. That’s why Google and others have increased the data center temperature to 80 degrees F (about 27 degrees C).
The first edition of the ASHRAE guidelines in 2004 created a recommended temperature upper limit of 77 degrees Fahrenheit. The second edition in 2008 recommended an upper limit of 81 degrees.
Some in the data center industry have asserted that ASHRAE TC 9.9 wasn’t moving fast enough to recognize the potential gains from higher temperatures. Beaty says that it’s been tricky to parse the benefits of warmer server inlet temperatures and free cooling, and to whom they best apply. “The area where we failed in communications is that through 2008, our numbers applied to legacy equipment,” said Beaty.
More Granular Guidance
The new edition of “Thermal Guidelines” defines four classes of data centers, with recommended and allowable temperature ranges for each class.
“This third edition creates more opportunities to reduce energy and water consumption but it is important to provide this information in a manner that empowers the ultimate decision makers with regards to their overall strategy and approach,” said Beaty. “The idea is to provide objective data, methodology and guidance, but at the same time, respect the right of the data center designers, owners and operators to optimize the operating environment of their data center based on the criteria most important to their business needs.”
The ASHRAE Recommended range “is a reliability statement,” said Beaty. “It turns out that all IT equipment is also tested to a different range, called Allowable. The Recommended and Allowable ranges have been around, and apply to legacy equipment.
“It’s good to understand what’s allowable,” he added. “But the first thing to understand is that there’s no risk in going to the limit of the recommended range.”
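The recommended/allowable distinction amounts to a two-tier lookup per equipment class, which can be sketched as below. The numeric values are the commonly published figures for the 2011 third-edition classes (in degrees Celsius) and should be verified against the ASHRAE text itself before being relied on:

```python
# Commonly published ranges from the 2011 (third edition) guidelines, in
# degrees C. Verify against the ASHRAE publication before relying on them.
RECOMMENDED = (18.0, 27.0)          # same recommended range for A1-A4
ALLOWABLE = {
    "A1": (15.0, 32.0),
    "A2": (10.0, 35.0),
    "A3": (5.0, 40.0),
    "A4": (5.0, 45.0),
}

def classify_inlet(temp_c, dc_class="A1"):
    """Return which tier a server inlet temperature falls into."""
    lo_r, hi_r = RECOMMENDED
    lo_a, hi_a = ALLOWABLE[dc_class]
    if lo_r <= temp_c <= hi_r:
        return "recommended"
    if lo_a <= temp_c <= hi_a:
        return "allowable"
    return "out of range"

print(classify_inlet(26.7))          # ~80 F, the Google-style set point
print(classify_inlet(30.0, "A2"))    # warmer than recommended, still allowable
```

This mirrors Beaty’s point: 80 degrees F sits inside the recommended range, while the allowable tiers give progressively wider headroom for the newer equipment classes.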
Many End Users Moving Slowly
Only a small percentage of data center operators have been willing to do that, Beaty says.
“I think we’re close to being able to say that only the late adopters are at 70 degrees,” he said. “My sense is that there’s a growing number of people in that 73 to 75 degree range. Many people are moving off the (2004 recommendations), but not coming close to (current ASHRAE recommended ranges). I think the majority are getting their toes wet. The adoption is going to happen because of the financial pressure on cutting costs, including air conditioning. The pressures are building.”
And for the largest data center operators – the Googles of the world? Beaty expects that the largest cloud builders will continue to innovate and test the boundaries of cooling efficiency, pushing above the recommended ranges.
“We can confidently say that if you don’t have a lot of hours above that range, it could be that you don’t need air conditioning,” said Beaty. “In fact, all that AC could be a failure point.”
Thank you for the article.
A couple of questions for you:
Some of our customers have increased the set point temperature and in some cases the hardware has had an increase in fan speed. This has resulted in an increase in IT load.
Have you encountered this before?
And, even though the power saving has come as a result of reduced CRAC load, have you seen an impact this may have had in the return/replacement of hardware due to fans running faster than before?
Over IP Group
We first wrote about this dilemma back in 2009 (Raise the Temperature, Fight the Fans), as studies identified increased server fan activity once the inlet temperature hit 80 degrees, offsetting the benefits of the higher set point. Facebook has since filed a patent for a solution in which it uses load balancers and automation to address fan activity. Microsoft has pursued a different approach, seeking to eliminate fans altogether – not something every company can manage effectively.
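The reason modest fan speed-ups can erode the CRAC savings is the fan affinity law: fan power scales roughly with the cube of fan speed. A rough sketch, with an entirely hypothetical 10 W of baseline fan power per server (not a figure from the article):

```python
def fan_power_w(base_power_w, speed_ratio):
    """Fan affinity law: power scales with the cube of fan speed.
    speed_ratio is the new speed divided by the baseline speed."""
    return base_power_w * speed_ratio ** 3

# Hypothetical server with 10 W of fan power at baseline speed.
base = 10.0
for ratio in (1.0, 1.2, 1.5):
    print(f"{ratio:.0%} speed -> {fan_power_w(base, ratio):.2f} W per server")
```

A 20% speed increase costs about 73% more fan power per server, so if the CRAC savings per server are smaller than that increment, the higher set point is a net loss – which is exactly the tradeoff the 2009 studies flagged.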
For tenants in a colocation facility, it’s difficult to turn the thermostat up a little. Knowing that there’s no risk in going to the limit of the recommended range, though, may help many make the decision to go up one or two degrees. Every little bit helps. The elimination of fans and cooling requirements sounds like a great goal too!
Here is a description of the ASHRAE Thermal Guidelines explaining the recommended and allowable rack inlet temperature ranges.