Most Users Resist Warmer Data Centers


Zahl Limbuwala of BCS discusses temperature ranges for data center operations during the Google European Data Center Efficiency Summit held in Zurich on Tuesday.

ZURICH – Since 2008, the largest players in the data center industry have been advocating operating server rooms at warmer temperatures. Google, Yahoo, Microsoft and Facebook have all embraced the idea of raising the thermostat in their data centers, saying the savings on cooling can be substantial and that the practice hasn’t led to elevated hardware failure rates.

But few enterprise data centers are following suit, according to Zahl Limbuwala, chairman of the Data Center Specialist Group at BCS, the leading IT industry group in the UK.

“We’re still stuck at low temperature points,” said Limbuwala, who gave a presentation at the Google European Data Center Efficiency Summit Tuesday in Zurich, Switzerland. “All the work the industry has done on this issue still needs to roll through. It really hasn’t had the impact we thought it might.”

Data Centers Weigh Risk vs. Reward

Most data centers operate in a temperature range between 68 and 72 degrees F (20 to 22 degrees C), and some are as cold as 55 degrees F (about 13 degrees C). But Google and others have raised their data center temperatures to 80 degrees F (about 27 degrees C).

Raising the baseline temperature inside the data center – known as the set point – can save money spent on air conditioning. By some estimates, data center managers can save 4 percent in energy costs for every degree the set point is raised. But nudging the thermostat higher may also leave less time to recover from a cooling failure, and is only appropriate for companies with a strong understanding of the cooling conditions in their facility.
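As a rough illustration of that rule of thumb, the sketch below compounds the assumed 4 percent saving over each degree of increase. The annual cooling cost and the size of the increase are made-up inputs for the example, not figures from any particular facility.

```python
# Rough illustration of the "4 percent per degree" rule of thumb quoted above.
# The cost figure and temperature change are hypothetical, not measured data.

def estimated_cooling_savings(annual_cooling_cost, degrees_raised_f, pct_per_degree=0.04):
    """Compound the assumed per-degree saving over each degree the set point rises."""
    remaining = annual_cooling_cost * (1 - pct_per_degree) ** degrees_raised_f
    return annual_cooling_cost - remaining

# Hypothetical example: a facility spending $500,000/year on cooling raises its
# set point from 72 F to 80 F (8 degrees), as Google and others have done.
print(round(estimated_cooling_savings(500_000, 8)))  # roughly 139,000 (dollars)
```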

The leading U.S. industry group for heating and cooling professionals, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers), raised the upper end of its recommended operating range for data centers from 77 degrees F to 80.6 degrees F (25 to 27 degrees C) in a 2008 update.

Limbuwala reviewed data from a survey of BCS data center operators, which found that the majority of data centers continue to operate at 22 degrees C (72 degrees F), with most of the remainder between 20 and 24 degrees C.

ASHRAE has just released new, expanded guidelines for server rooms based on the type of equipment and application, and the level of control the operator has over the environment. For data centers running mission-critical operations, ASHRAE has maintained the 2008 upper limit of 80.6 degrees F. It has also created several new categories for operators who have strong control over their environment and manage reliability using groups of networked data centers, such as Google, Microsoft or Yahoo. The upper limit for those facilities now extends as high as 113 degrees F (45 degrees C).
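For readers who want to encode these limits in their own monitoring tooling, a minimal sketch follows. It uses only the two upper limits quoted in this article; the dictionary keys are placeholder names, not ASHRAE's official class labels, and the real guidelines also specify lower bounds and humidity ranges that are omitted here.

```python
# Upper inlet-temperature limits as quoted in this article. Category names are
# placeholders, not ASHRAE's official class labels; lower bounds and humidity
# ranges from the actual guidelines are omitted.
UPPER_LIMIT_C = {
    "mission_critical": 27.0,        # 80.6 F, carried over from the 2008 guidance
    "networked_high_control": 45.0,  # 113 F, for tightly controlled, networked fleets
}

def within_guideline(category: str, inlet_temp_c: float) -> bool:
    """Return True if the measured inlet temperature is at or below the quoted limit."""
    return inlet_temp_c <= UPPER_LIMIT_C[category]

print(within_guideline("mission_critical", 30.0))        # False
print(within_guideline("networked_high_control", 30.0))  # True
```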

Impact on Legacy Facilities and Equipment

The new guidelines draw clear distinctions between different types of facilities, according to Don Beaty, founder of DLB Associates and active member of ASHRAE Technical Committee 9.9. Beaty said many data centers covered by the ASHRAE guidelines are either older facilities or running legacy equipment that may not tolerate warmer environments as well as new equipment.

Harkeeret Singh of Thomson Reuters, who gave a presentation in Zurich on behalf of The Green Grid, emphasized the need for data center operators to closely monitor and manage conditions within their server rooms.

“Widening the temperature is more relevant for new data centers,” said Singh. “We can’t raise the temperature in the data center without dealing with airflow management. First do all the airflow management tasks, then raise the temperature.”
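A minimal sketch of that "airflow first" check: compare rack inlet readings against the room target and flag hot spots before touching the set point. The sensor names, readings, and the 3-degree tolerance below are illustrative assumptions, not values from any published guideline.

```python
# Target room temperature and the tolerance for rack-inlet hot spots; both are
# illustrative assumptions, as are the sensor names and readings below.
ROOM_SET_POINT_C = 22.0
MAX_INLET_SPREAD_C = 3.0

inlet_temps_c = {
    "rack-a01": 22.5,
    "rack-a02": 23.1,
    "rack-b07": 29.4,  # a hot spot created by poor airflow management
}

hot_spots = {rack: temp for rack, temp in inlet_temps_c.items()
             if temp > ROOM_SET_POINT_C + MAX_INLET_SPREAD_C}

if hot_spots:
    print("Fix airflow before raising the set point:", hot_spots)
else:
    print("Inlet temperatures are uniform; raising the set point carries less risk.")
```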

About the Author

Rich Miller is the founder and editor-in-chief of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

8 Comments

  1. Jeff

    The bottom line is that not everyone runs a data center with expendable servers. Microsoft, Google, Yahoo, Facebook, etc. all have a thousand spares ready to spring into action should one server fail. For enterprises that are more heavily reliant on every last piece of their IT infrastructure, or for those that want to maintain a very generous safety cushion in the event of a crisis, the setpoint game is a hard sell. What Google and the rest should be advocating is an approach to better rationalize extremely-critical (like processing trades at a stock exchange) vs. redundancy-ready (like providing web-email or cloud style file storage) services. Once that is tackled, the expected availability of each sphere can be determined independently and enterprises that once required 100% (or best-effort) uptime can dedicate more resources to the critical services and start to cut back and see efficiency gains on the less critical ones.

  2. jeff h

    It's also an issue of user comfort - no one wants to spend all day in a working environment that is 80 degrees. It's just not feasible. The data center still needs people working in there - until you have robots to do it all, you will see the upper limit at most places being 75-78 degrees for human comfort reasons.

  3. So does that mean we gotta allow our NOC staff to wear shorts and Hawaiian shirts all the time? Wait... they do that anyway.

  4. We don't run expendable servers, but we do replace servers every 3-4 years. All our server vendors provide full repair/replace coverage during that time at temperatures up to 90 F or higher. The quotes from Harkeeret Singh are exactly correct: “We can’t raise the temperature in the data center without dealing with airflow management. First do all the airflow management tasks, then raise the temperature.” In our old, poorly designed data center, we can't raise room temperature above 68 F without creating hot spots higher than 90 F. We also can't fix airflow. Our new data center will have tightly controlled airflow. We plan to run with lots of free air cooling with a max temp of 80 F, maybe 90 F. Most of the time we should be well below 80 F, based on local weather.

  5. Change in every area of our lives takes time, and IT people are no different: humans are comfortable within a certain temperature range, so we assume IT equipment is too. Legacy facilities with no physical upgrades to enhance airflow are a common situation (a good business opportunity, indeed!), so "spreading the word" will take two or three more years before the percentage of high operating temperature adopters rises. But, at least in Mexico, refurbishment projects are becoming more common: of every ten I see, at least 6 now ask for cooling solutions, where two years ago the figure was 2 of every ten.

  6. Andy Dewing

    Data centres have to be designed specifically to take full advantage of the new cooling philosophies. They are space hungry, and retro-modification of older facilities is simply not easily done, if at all possible, especially if uninterrupted operation is the order of the day, as it usually is. In the colo world it's not even possible in many cases to get tenants to improve their air management, even if they could, as there is little or no obligation in SLAs to do so. Consequently users, owners and operators are frequently stuck with what they built, and often can't raise set points even if they wanted to, simply because of localised hotspots caused by poor air management and bad equipment layouts. The uptake of the new approach will gradually improve as these older data centres are either obsoleted or rebuilt. However, this is a phase of the industry business model we are only now starting to have to think about.

  7. Paul Peoples

    I think a closer look at the expanded ASHRAE guidelines is required. They show that variable fan speed for servers increases as inlet temperature increases, resulting in as much as a 25% increase in overall server power consumption. So, there is an inflection point where overall energy consumption will actually increase as air conditioning energy consumption is decreased. And that consumption is on the most expensive portion of the data center: UPS capacity. (A rough numerical sketch of this trade-off follows the comments.)

  8. @Andy Our new data center will be 1/3 the size of the data centers it replaces, and about 1/2 the capital cost to build out, and 2/3 the operational cost to run. Efficient data center design can be cheaper than traditional design. Many data centers can benefit from simple, inexpensive airflow management changes with little to no impact on operations. The design knowledge is out there, but hard to find in one place.
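As referenced in comment 7, here is a toy model of that trade-off: cooling energy is assumed to fall roughly 4 percent per degree as the set point rises, while server fan overhead is assumed to grow toward the 25 percent penalty cited in the comment at high inlet temperatures. Every coefficient is an assumption chosen only to show that a minimum (the inflection point) can exist, not a measurement of any real facility or server line.

```python
def total_power_kw(set_point_c, it_base_kw=500.0, cooling_base_kw=200.0):
    """Toy facility model: IT load plus cooling load plus a server-fan penalty."""
    # Cooling energy assumed to fall about 4 percent per degree C above 20 C.
    cooling = cooling_base_kw * 0.96 ** (set_point_c - 20)
    # Server fan overhead assumed to grow quadratically above 25 C, approaching
    # a 25 percent penalty on IT load near 45 C (the figure cited in the comment).
    fan_penalty = it_base_kw * 0.25 * max(0.0, (set_point_c - 25) / 20) ** 2
    return it_base_kw + cooling + fan_penalty

# Scan candidate set points for the minimum of the modelled total power.
best = min(range(20, 46), key=total_power_kw)
print("Lowest modelled total power at a set point of", best, "C")  # around 33 C here
```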