Raise the Temperature, Fight the Fans

Raising the temperature in the data center can save big money on power costs. But nudge the thermostat too high, and the energy savings can evaporate in a flurry of fan activity.

That was the takeaway from several presentations at last week’s Data Center Energy Efficiency Summit (DCEE) in Sunnyvale, Calif. The case studies documented the benefits of raising the temperature in a data center environment, which can help save on energy used for air handlers and the chiller plant. But they also offered data on increased activity by server fans, which kick on as the temperature rises, nullifying gains from a warmer server room.

No More ‘Meat Lockers’?
The presentations at DCEE, which was sponsored by the Silicon Valley Leadership Group, provide guidance for data center operators as the industry moves away from “meat locker” server environments. Companies like Google and Sun Microsystems have advocated raising the temperature to reduce the power required for cooling server-packed racks. The trend has also received a boost from ASHRAE, the industry group for heating and air conditioning professionals, which increased the top end of its recommended temperature range from 77 to 80 degrees.

In one case study, Cisco Systems (CSCO) said it expects to realize savings of $2 million a year by raising the temperature in its research labs. Cisco’s Chris Noland and Vipha Kanakakorn oversaw the proof-of-concept project, in which they raised the temperature in three research labs on Cisco’s San Jose campus. Most of the increases were implemented gradually, but in one lab the team hiked the temperature by two degrees per day for four consecutive days.

Raising Chiller Set Point
As the server room neared 80 degrees F (27 C), the Cisco researchers raised the chiller water set point from 44 to 46 degrees F (6 to 7 degrees C). “Optimizing the room opened the door to raising the room temperature, which opened the door to raising the chiller temperatures,” said Noland.
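
To put those numbers in perspective, here is a back-of-the-envelope calculation of what a two-degree bump in chilled-water temperature can be worth. The 1-2 percent-per-degree rule of thumb, the plant size, and the utility rate below are illustrative assumptions, not figures from Cisco's study.

    # Rough estimate of chiller savings from a higher chilled-water set point.
    # The 1.5%-per-degree-F figure is a commonly quoted rule of thumb; the
    # plant size and utility rate are made up for illustration.
    CHILLER_KW = 500.0          # hypothetical average chiller plant draw
    SAVINGS_PER_DEG_F = 0.015   # roughly 1-2% per degree F is often cited
    SETPOINT_RAISE_F = 2        # e.g. 44F -> 46F
    RATE_PER_KWH = 0.10         # hypothetical utility rate, $/kWh

    saved_kw = CHILLER_KW * SAVINGS_PER_DEG_F * SETPOINT_RAISE_F
    annual_dollars = saved_kw * 8760 * RATE_PER_KWH
    print(f"~{saved_kw:.0f} kW saved, roughly ${annual_dollars:,.0f} per year")

At that scale the setpoint change alone is worth on the order of $13,000 a year for a single plant; spread across many labs and chillers, it is easy to see how the savings add up.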

Because of the number of research labs at Cisco, optimizing the server rooms in lab environments offers substantial savings. But some of the variables change in data centers filled with high-density racks, as seen in two case studies that examined higher temperatures as part of broader testing on data center efficiency.

The Chill-Off 2 team, which included technologists from Data Center Pulse and Lawrence Berkeley National Laboratory, found that energy use declined as the temperature in the cold aisle increased – until it hit 80 degrees. At that point, the trend reversed and power usage soared as server fans kicked on. “If the fans start running at higher temperatures, we lose all those savings,” said Bill Tschudi of LBNL.

Vali Sorell of Syska Hennessy Group presented a similar case study in which he evaluated cooling options for a financial client, testing five different configurations at a power density of 20 kilowatts per rack. Once the supply air exceeded 75 degrees, there was a six-fold surge in fan energy. “You’ve got to be really careful about that,” said Sorell. “I think there’s a happy medium (between higher temperatures and fan energy).”
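
The shape of that curve is no accident: fan power rises roughly with the cube of fan speed, so once warmer inlet air pushes server fans off their idle floor, a modest increase in speed multiplies fan energy. The short sketch below is a rough illustration of that tradeoff, not a model of either study; the fan-speed ramp, the baseline power figures, and the assumption that cooling-plant energy falls linearly as supply air warms are all hypothetical.

    # Illustrative only (not Chill-Off 2 or Syska Hennessy data): cooling-plant
    # energy is assumed to fall linearly as supply air warms, while server fan
    # power follows the fan affinity law (power ~ speed^3) once the fans ramp.

    def plant_power_kw(supply_f, base_kw=100.0, savings_per_f=1.5):
        """Hypothetical chiller/air-handler power vs. supply air temperature."""
        return base_kw - savings_per_f * (supply_f - 65)

    def fan_power_kw(supply_f, idle_kw=10.0, ramp_start_f=75, full_speed_f=95):
        """Hypothetical server fan power: flat at idle, then cubic with speed."""
        if supply_f <= ramp_start_f:
            return idle_kw
        frac = min(1.0, (supply_f - ramp_start_f) / (full_speed_f - ramp_start_f))
        speed = 0.4 + 0.6 * frac             # fan speed ramps from 40% to 100%
        return idle_kw * (speed / 0.4) ** 3  # affinity law: power ~ speed^3

    for t in range(65, 91, 5):
        total = plant_power_kw(t) + fan_power_kw(t)
        print(f"{t}F supply: plant {plant_power_kw(t):5.1f} kW, "
              f"fans {fan_power_kw(t):5.1f} kW, total {total:5.1f} kW")

With these made-up numbers the total bottoms out around 75 degrees and climbs sharply after that, which is the same basic shape both case studies reported.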

About the Author

Rich Miller is the founder and editor-in-chief of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

16 Comments

  1. Another consideration is causing an outage because of this. A warm data center gives your first responders less time to respond to a cooling or power issue. The temperature in a warm data center can quickly rise to unacceptable limits before your team can even identify, let alone correct, the issue. A cooler data center gives you more time to react in the event of a partial or complete power or cooling outage. By raising the temperature you may save on your power bill, but this may cost you in reliability and the ability to react when something unexpected happens. For some this may be worth the risk, but that all depends on how much an outage and potentially damaged equipment would cost.

  2. Dan

    I would not buy rack space from a datacentre that would run such a fine line between uptime and cost savings. The risk to me would be too high.

  3. So, Robert, by your argument, cool data center temperatures were not established to protect server gear, but to allow DC managers time to react to problems that their systems cannot handle? Seems like an excuse not to explore new technology or embrace energy savings potential.

  4. Hugh Redelmeier

    If the price of energy varies over time, it might make sense to lower the temperature when the price goes down and raise it when the price goes up. In effect, this is "banking" low cost energy.

  5. Suppose temperatures could be raised without consequence? Liquid immersion cooling with dielectric coolants would enable heat to be transferred passively to outdoor air without a single fan or blower. All heat is removed as saturated fluid vapor at atmospheric pressure. The fluid is only a few degrees cooler than junction temps, so the utility of the heat is optimized if one should choose to use it.

  6. Mike C

    In my case our servers are running with nearly 100% CPU usage, so turning the temp up would be very risky. I'm lucky enough to have the cooling I need; in Government we don't have the replacement cycle the big boys do.

  7. @blw Most existing enterprise servers have an operational temperature range indicated in their product documentation. It's important to keep them within that range with a buffer in the event you lose cooling for a few minutes. The exploration of new cooling technologies should be done in the Engineering Lab of a hardware vendor, not in a production data center with customer data at risk. Companies like Google push the envelope and do a lot of hardware work in their data centers, but to their credit they are doing some true engineering rather than just second-guessing the engineers at their hardware vendor. Cranking up the thermostat in existing data centers is not engineering or new technology. It's just being cheap and cutting corners while putting your customers at risk.

  8. Chris

    blw-- your comment to Robert is unjustified. He's merely pointing out a risk (and a legitimate one, IMHO). Adopting new technologies for the sake of "progressiveness" without considering the risks is reckless. I wouldn't want my business managed that way. Why not an underground facility? The temperature is (generally) constant year-round. Or build in cooler climates? Solar energy is the overall solution. We have to get serious about solar.

  9. Gerry Creager

    My experience has been that disk casualties are the sentinel of thermal-induced failures, although better drive reliability has hidden this somewhat. While my standard 65F may be excessive, it proved Robert Chase's point several weeks ago when a chiller failure (pump impeller fracture) took the chiller down for 6 hours. We were able to respond in a timely manner, maintain critical systems and didn't see the room temperature rise above 80F. Mind you, if I'd gone into that failure at 80F, I couldn't have kept it down in any reasonable manner.

  10. Sandy

    "It depends". Raising temperatures increases your risk of overheating your servers only if your datacenter design and HVAC design can float through HVAC outages for any significant period of time. If you lose your blowers, a 30 KW rack of blade servers will overheat and shut down in 60 seconds at 90F, or 90 seconds at 70F. Will you really be able to respond differently between seconds 61 and 90? How critical are those servers? You need to pay attention to design, risks, and potential power savings or increases. That's called engineering. There are no cut and dried answers.

  11. @Sandy Most blade center chassis come standard with redundant blower modules, and the management module is capable of speeding up the fans on the 2nd blower module and throttling the CPUs to make up for the lost blower until a replacement is installed. You would have to lose the first blower module and ignore it for your scenario to be a real issue. Even a closet-sized room would take more than 90 seconds to reach a critical temperature if the room lost cooling. Power module and management module failures are more likely to take out a blade-based chassis. I agree with you somewhat about every scenario being different, but thermodynamics and redundancy work the same in every data center regardless of what kind of equipment you have installed. Real engineers leave room in their calculations for unforeseen events to occur without affecting availability.

  12. SM

    @Gerry: "While my standard 65F may be excessive, it proved Robert Chase’s point several weeks ago when a chiller failure (pump impeller fracture) took the chiller down for 6 hours." I'd say the cause of failure here wouldn't be the room's starting temperature, but the fact you had no redundancy. Assuming a Lead/Lag/Standby configuration, if lead and lag are running side by side, on a fault by lead, standby should start up, lag should become lead and standby become lag, in seconds or at worst minutes. Same for chillers, etc. If your cooling architecture goes away for multiple *hours* after a single device failure (of something other than an actual pipe rupture, say, on a single loop site), a change of setpoint seems the least of the worries.

  13. SM

    @Robert "Cranking up the thermostat in existing data centers is not engineering or new technology. It’s just being cheap and cutting corners while putting your customers at risk." Actually, in my experience, most datacenters are running in such a way that the supply air to many/most servers is at the bottom of or below the ASHRAE and manufacturers' levels, because people don't manage their airflow well.

    If people would just use blanking panels, deploy decent hot and cold aisle isolation, grommeted tiles, etc., then having a *consistent* supply air temp of 65F to all servers (even if the return air temp is say 90F) is much better than having some servers at 55F just to make sure the 'hot server' at the 'top of the rack' gets 70F air. And when you can manage air flow and temperature like that, you can raise the chilled water setpoint 2 or 4 degrees and gain 5-10% efficiency on your chillers, and you can raise the average return air temps, letting your CRACs or AHUs operate with no or at least less dehumidification losses and a higher delta T, giving you more gains on that side.

    Yes, if you run around with a thermometer, your 'coldest' supply air temp is now 'higher' than the 'coldest' supply air temp used to be, but *all* of the servers would be in the proper range, and the savings on energy will pay for the panels, grommets, and all the other things that were bought. Also, redundancy is easier to manage, as you know where the air is going to go when you do have a CRAC fail or an AHU shut down, as opposed to a data center with a solution of "Just throw more and colder air out there," which experiences completely unexpected and seemingly random hotspots during PMs or unit outages.

    At the end of the day, if you're cranking up your thermostat *for the right reasons*, you're more likely to be taking care of both your gear and the customers, *and* saving money and resources.