HVAC Group Says Data Centers Can Be Warmer

For some time now, leading players in the data center industry have been raising the temperatures in their data centers, saving hundreds of thousands of dollars in cooling costs in the process. The list of companies singing the praises of savings through higher baseline temperature settings in the data center includes Google, Intel, Sun and HP.

The leading industry group for heating and cooling professionals has now joined the choir. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) this week expanded its recommendations for ambient data center temperatures, raising its recommended upper limit from 77 degrees to 80.6 degrees.

Data center managers can save up to 4 percent in energy costs for every degree of upward change in the baseline temperature, known as a set point. The higher set point means less frequent use of air conditioning, which saves the energy used to run cooling systems.
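To make the rule of thumb concrete, here is a minimal sketch of how those savings compound per degree. The 4 percent rate and the baseline cooling bill are illustrative assumptions, not measured figures; real savings depend on climate, equipment, and airflow design.

```python
def estimated_cooling_cost(annual_cooling_cost, degrees_raised, savings_per_degree=0.04):
    """Project the annual cooling cost after raising the set point.

    Applies the per-degree savings rate compoundingly: each degree of
    increase trims up to 4 percent off the remaining cooling bill.
    """
    return annual_cooling_cost * (1 - savings_per_degree) ** degrees_raised

# Hypothetical example: raising the set point from 77 F to 80.6 F
# (3.6 degrees) against a $500,000 annual cooling bill.
baseline = 500_000
projected = estimated_cooling_cost(baseline, 80.6 - 77.0)
print(f"Projected annual cooling cost: ${projected:,.0f}")
```

At the 4 percent rate, a 3.6-degree increase would trim roughly 13-14 percent off the cooling bill in this sketch.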

Running your data center warmer also raises the potential for “hot spots” to form in areas where cooling airflow doesn’t reach an entire rack. That’s why it’s a good idea to implement advanced monitoring of rack temperatures and data center airflow before nudging the set point higher.
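The kind of rack-level monitoring suggested above can be as simple as flagging any rack whose inlet temperature exceeds the recommended ceiling. This is a minimal sketch; the threshold value uses ASHRAE's new 80.6-degree upper limit, and the reading format (a mapping of rack IDs to Fahrenheit readings) is an assumption for illustration.

```python
HOT_SPOT_THRESHOLD_F = 80.6  # ASHRAE's new recommended upper limit

def find_hot_spots(rack_temps):
    """Return the rack IDs whose inlet temperature exceeds the threshold.

    rack_temps: dict mapping rack ID -> inlet temperature in Fahrenheit.
    """
    return sorted(rack for rack, temp in rack_temps.items()
                  if temp > HOT_SPOT_THRESHOLD_F)

# Hypothetical sensor readings for four racks.
readings = {"A1": 74.5, "A2": 82.1, "B1": 79.0, "B2": 84.3}
print(find_hot_spots(readings))  # -> ['A2', 'B2']
```

In practice, readings would come from rack-mounted sensors polled continuously, and a hot-spot alert would prompt an airflow review before the set point is raised any further.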

ASHRAE approved the changes to its temperature recommendations this week at its meeting in Chicago. Mark Fontecchio from TechTarget has additional coverage of the ASHRAE decision, along with discussion of several impacts of higher set points: more noise in the data center as server fans become more active, and toasty conditions in the hot aisle, which is often 30 degrees warmer than the cold aisle.


About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

Comments



  1. I talked to one large data center manager who claimed to have tried that. He said that the server cooling fans ran faster, drew more power, and negated any energy savings. The 60-watt increase in fan power in the ASHRAE doc doesn't sound like enough of a power increase to negate the significant cooling savings.

  2. aelarsen

    Only segregation of heat from servers works to reduce the tonnage required, as well as pump hp and air handler hp, and allows a smaller chiller, a smaller cooling tower, and smaller piping and duct systems. Caltech is working to get the metrics of segregated heat from servers, and this will change the equation on HVAC for data centers. There are green data systems out there that segregate and remove heat without returning it to the HVAC cycle, and this is the approach that will be adopted going forward.

  3. Joe Miller

    This is very similar to comfort cooling debates that have been going on for twenty years. Fan speed has a greater impact on overall system efficiency than chilled water temperature. I had to convince our HVAC vendor that with our aisle containment design we could elevate the chilled water temperature and not increase the fan speed. Only aisle containment will allow full advantage of the energy saving potential. Instead of me trying to explain it all, here are some whitepaper links; one dates to 1991. http://www.trane.com/commercial/library/vol241/enews24_1.pdf http://www.trane.com/commercial/library/en20-2.pdf http://www.trane.com/commercial/library/vol29_2/enews_29_2_042400.pdf

  4. George Ross

    I have been involved with data center operations, at some level, for over 25 years now, and this has been an ongoing discussion. Saving money on cooling energy is something the corporate financial guys are always going to be interested in. The problem/challenge is identifying where those savings are really coming from, if at all. The earlier comment about cooling costs being offset by greater fan speeds is one aspect of looking at where the "real" costs are. My question is how did that data center manager know that the fans were pulling that much more power? I'm not saying it didn't happen; I just don't know how it would be effectively/accurately measured. Without accurate measures, it is all simply anecdotal information and not real data. I believe the equipment going into data centers and network hubs is capable of sustaining higher temps, so going to 80.6 degrees (F) is probably not going to bring anyone down, but it does cut into the cushion that most managers in operations like to work with.

  5. Robert Prophet

    The upper limit of 80.6 degrees is a good temp. I set my problem-point temperature line at 83 F and my overall data center low temp at 67/68 F. This seems to keep condensation at a minimum. I find frequent physical access into and out of adjoining warmer areas to be a major problem with maintaining temp.

  6. Charles

    So when we're talking about raising the temp up to 80 degrees, are we talking about the room temp for each of the zones within the data center, or the set point for the AC units? Also, when we talk about set points for AC units, where is the AC unit reading the temperature? Is it reading it from the drop-ceiling area, from the room, from under the raised floor? Trying to educate myself.