Top 7 Reasons Data Centers Don’t Raise Their Thermostats
October 22nd, 2013 | Industry Perspectives | By: Ron Vokoun
In 2011, it was with great fanfare that ASHRAE released its updated Thermal Guidelines for Data Processing Environments – Expanded Data Center Classes and Usage Guidance. The new guidelines created new classes of equipment ratings and corresponding wider ranges of operating conditions. Yet, here we are in 2013 and very few data centers are even raising their thermostats to the recommended limits prescribed by ASHRAE’s 2008 guidance.
Raising the thermostat is the single most simple energy saving move a data center can make, so why is it that they are so hesitant to do so? Generally speaking, raising the temperature setting 1.8°F (1°C) will save two to four percent on the overall energy use of a data center. What a great ROI for a simple flick of a switch!
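As a rough illustration of that rule of thumb, here is a minimal Python sketch. All inputs (baseline energy, electricity price, the choice of 3% per °C as the midpoint of the 2–4% range) are illustrative assumptions, not figures from the article:

```python
# Rough estimate of annual savings from raising the cooling setpoint.
# Assumes the article's rule of thumb: 2-4% energy saved per 1 degree C,
# compounded per degree. Every number below is illustrative.

def annual_savings_usd(baseline_kwh, delta_c, pct_per_deg=0.03, price_per_kwh=0.10):
    """Estimated dollars saved per year for a setpoint raise of delta_c degrees C."""
    remaining = (1 - pct_per_deg) ** delta_c   # fraction of energy still used
    saved_kwh = baseline_kwh * (1 - remaining)
    return saved_kwh * price_per_kwh

# Example: ~8.76 GWh/year of facility energy (a 1 MW load running
# year-round, ignoring PUE for simplicity), raised 3 degrees C.
print(round(annual_savings_usd(8_760_000, 3)))   # → 76498
```

Even under these conservative toy numbers, a few degrees is worth tens of thousands of dollars a year, which is the point of the "flick of a switch" ROI.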
As I often do when I have a question, I took to Twitter to find answers, or at least opinions. Specifically, I engaged Mark Thiele of Switch, Jan Wiersma of Data Center Pulse, Tim Crawford of AVOA, and Bill Dougherty of RagingWire in what became a spirited exchange of reasons why temperatures largely remain unchanged.
Without further ado, and with my apologies to David Letterman, I give you:
The Top 7 Reasons Why Data Centers Don’t Raise Their Thermostats
7. Some HVAC Equipment Can’t Handle Higher Return Air Temperatures
I will confess that I am not an engineer, but this one doesn’t make sense to me. I have been told by engineers in the past that the higher the return air temperature, the more efficient the system will be. I would be interested in hearing opinions, but until convinced otherwise, I’m going to call this one bunk.
6. Colocation Data Centers Have To Be All Things To All People
This one makes sense to me. Colocation providers can’t choose their customers, but rather they compete for them. If they have a potential customer that feels uncomfortable with the warmer temperatures, they will lose them to one of their competitors that keeps their data center unnecessarily cool. They also have to plan for the lowest common denominator in that many customers are still using legacy equipment that doesn’t fit into the ASHRAE standard classifications.
This makes me wonder if there might be the potential for a new colocation product. Given the energy savings, perhaps physically separated sections of the data center can be offered at a discounted rate in exchange for agreeing to operate at a higher temperature? This could be an attractive cost savings for a few enlightened souls.
5. Fear, Uncertainty, Doubt (FUD)/Ignorance
This one is very widespread throughout the industry. I am told that most colocation RFPs from CIOs specify 70°F (21°C). The industry is full of sayings like, "Nobody ever got fired for keeping a data center cold." That may change if the CFO finds out how much money he can save by raising the temperature!
4. Intolerable Work Environment
I can say with confidence that I would not enjoy working in a hot aisle that’s reaching temperatures up to 115°F (46°C). With that said, construction workers in Arizona work in that heat every day during the summer. I’ll leave it to OSHA to say what’s appropriate here in the U.S. Jan Wiersma, who lives and works in Europe, informed me that the EU has a reasonable law for working in the hot aisle, so it can be done.
3. Cultural Norms and Inertia
I’ve always hated hearing, "Because that’s the way we’ve always done it." But for legacy data centers, this is often the case. A more reasonable excuse that also fits into this category is that it’s probably nearly impossible to change an SLA without opening up all of the other terms to renegotiation.
2. Concern Over Higher Failure Rates and Performance Issues
The good folks at the Green Grid have debunked this one adequately already. A presentation at the Uptime Institute Symposium earlier this year from representatives of ASHRAE’s TC 9.9 agreed. A good qualification that Mark pointed out is that consistent environmental conditions are important to realizing lower failure rates.
And the number one reason why data centers don’t raise their thermostats (drum roll please)…
1. Thermal Ride-Through Time
If a data center has an outage of some sort, having an environment with a lower temperature will provide a longer thermal ride-through time. This is magnified in a containerized data center solution where the total volume of conditioned air is very limited in comparison to a more traditional open data center.
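A back-of-the-envelope sketch makes the containerized case concrete. This treats only the sensible heat capacity of the room air (ignoring the thermal mass of racks, walls, and floor), so the volumes, loads, and limits below are assumed values for illustration, not a design calculation:

```python
# How long can the room air alone absorb the IT load before hitting a
# temperature ceiling? A conservative sketch: real rooms ride longer
# because racks and structure also soak up heat.

AIR_DENSITY = 1.2   # kg/m^3, approximate density of air
AIR_CP = 1005.0     # J/(kg*K), specific heat of air

def ride_through_seconds(room_volume_m3, it_load_w, start_c, limit_c):
    """Seconds for the room air to warm from start_c to limit_c under it_load_w."""
    heat_capacity_j = room_volume_m3 * AIR_DENSITY * AIR_CP * (limit_c - start_c)
    return heat_capacity_j / it_load_w

# A large open hall vs. an ISO-container module (assumed figures):
open_hall = ride_through_seconds(10_000, 500_000, 22, 32)  # ~241 s
container = ride_through_seconds(80, 50_000, 22, 32)       # ~19 s
```

The spread between the two results is the point: with very little conditioned-air volume per kilowatt, a container has only seconds of margin, so its operators cling to every degree of headroom.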
It seems there are very few good reasons why you should not raise the temperature in your data center, at least a bit. At the end of the day, you need to understand your business and the risks associated with its data center operations and make an informed decision. If your analysis indicates you can, flip that thermostat up a bit higher and enjoy the money you save as a result.
Many thanks to Mark, Jan, Tim, and Bill for sharing their wisdom on Twitter! I highly recommend following them if you don’t already.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Great article, Ron!
Data centers by nature are risk averse, and who can blame them? We have seen success with raising temperatures in the data center when it’s combined with aisle containment strategies; however, hot aisle containment can leave you with high temps, as you note in #4. You can add a small bypass duct/damper/grille into the hot aisle to allow some cool air into the environment when occupied to help that situation.
Higher return temps will help efficiency, but make sure that coils and systems are sized/selected for this. Issue debunked.
Raising the temperatures allows for greater number of hours of free cooling and therefore significant energy savings. The trick as we all know is to balance energy efficiency with minimal risk to uptime, and the comfort level of our clients. Raising the temps is a great idea if designed correctly.
We have instituted a process of “Post Occupancy Optimization” where our clients provide us with temperature/energy trending for a time period, and we work with them to make adjustments to system setpoints to save energy. Changes are typically made in small increments, but we’ve seen great results.
Very interesting, and as an air conditioning system designer, I come across one or more of these points almost daily.
I once calculated that the difference in thermal ride through time between running a 1000sq.m data centre at 21 deg C and 24 deg C was a paltry 7 seconds! Alternatively, our R&D facility has shown very large improvements in ride through time when cold aisle closure is implemented.
To bottom out point 7, most refrigeration systems rely on the cold refrigerant gas to keep compressor motors cool, so with extreme return air conditions, motors will run at higher than design temperatures and could “cook.” It should be emphasised that this is in extreme cases; raising your setpoint by up to 5 K (9°F) is not likely to get you into trouble, but will increase cooling efficiency greatly.
Reason number 7 is valid. My company manufactures several types of cooling systems for mission critical, including DX refrigerant based equipment. As return air temperatures rise the head pressure that the compressors operate against goes up also. With new refrigerants those pressures can approach (or even exceed) 600 psi. All DX equipment contains safety circuits to protect the compressor. Conventional, off-the-shelf, rooftop units and most CRAC units will have safety circuits designed around “normal” return temperatures of about 95 degrees F. At 105 to 115 degrees F those safeties will trip and shut down the compressors.
Good summary of the situation. While I have talked to more people this year who have done some temperature raising in their data centers than I encountered in my previous eight years of stressing benefits of hot and cold air separation, they still don’t come close to representing a majority. You have covered all the reasons I hear. One comment about note #7. High return temperatures are never going to be a problem with water-cooled CRAH units; however, it can be an issue with DX cooling units which will be limited in both maximum temperatures and delta T’s, though I understand that there can be some customized product flexibility when the application environment is predictable.
Tom White | Posted October 23rd, 2013
Ron, I enjoyed your article and found it very informative. I just wanted to add my two cents regarding reason #7. Equipment efficiencies increase when the compressor has less work (lift) to do. The inlet side of the compressor sees low-pressure, low-temperature gas and the discharge side sees high-pressure, high-temperature gas. The volume is greater on the inlet side and reduced on the outlet side because the gas has been compressed, which elevates its temperature; plus, as someone pointed out, the refrigerant often cools the motor, so you have that heat, too. (This probably sounds familiar because you studied Boyle’s Law a few years ago, I’m sure.) On the surface, it would appear easy for a system to supply water or air at a higher temperature. However, the refrigeration circuit also has a component called an expansion device, located between the condenser coil and the evaporator coil, whose job is to control the amount of refrigerant flowing into the evaporator coil. There are many different types and many variations of each type, so for those who are still awake but about to nod off, I won’t get into the pros and cons of each and will cut to the chase: expansion devices require a certain amount of pressure differential to function properly. Someone mentioned that some of the new refrigerants operate at high pressures, and they are correct. However, there would be no efficiency gain (for the most part) if the compressor operated with the same amount of lift just so you could get the correct pressure differential across your expansion device. This is true whether you are using DX or chilled water. DX systems have an advantage because they have fewer heat exchangers to impede the transfer of energy, while chilled water (which, oddly enough, chills water through DX) has a bit of a cushion because of the volume in the loop, expansion tank, etc. I hope this cured everyone’s insomnia.
Adam Meyer | Posted October 23rd, 2013
I agree with Mr. Kaler. My company is also a manufacturer of custom HVAC equipment, condensing units and chillers. Off-the-shelf, lowest-cost equipment would struggle with high return temps because the refrigerant temperatures would exceed the manufacturer’s limits. Custom equipment can be designed to handle it by matching the components in the system so it still operates within the compressor’s safe envelope. You would of course have to use a manufacturer that has the ability to do that and an understanding of how to balance the overall system together. Overall, great article, and I appreciate the different perspectives.
Darren T | Posted October 23rd, 2013
Hayes, Kaler, White and Meyer are all correct, but #7 is completely dependent on system design. Sorry, Ian, but water-cooled CRAC units can also be affected.
Take a water/air-cooled DX CRAC with a coil designed for a 20-degree TD: if you start pushing 95°F return air, the best supply air you will get is 75°F. Never mind that the unit’s design RAT is 86°F. This higher RAT in turn causes a higher discharge temperature at the compressor and can actually push it past the maximum temperature the oil can handle, causing breakdown of the oil.
One also needs to look at the max return water temp their chiller can reasonably handle in a chilled water situation.
Keep in mind that many CRAC units were designed for more of a mixed-air environment; they weren’t designed around the hot and cold separation that many are moving to.
Jeff B | Posted October 23rd, 2013
Yes, increasing operating temps CAN improve efficiency and save energy in the right environments given the right HVAC systems, but this is a generalization. Raising DC temperature without regard to the cooling system may lead to less efficient cooling. Many times, simply raising the temperature set point creates humidification and dehumidification cycling that can be costly and less efficient than operating at a lower set point. A colleague of ours did a study and associated white paper on this issue and found that raising the setpoint temperature alone did little to improve efficiency in the DCs that were studied.
Paul Flower | Posted October 23rd, 2013
Interesting points here to consider when looking at this option.
One area that has seriously hampered me trying this before, is the variation of temperatures across a DC hall in the cold aisles, caused by a number of factors. The decision and associated risk factors are then driven by the potential impact of raising the temp based on the hottest area in the hall, rather than the average temperature zones.
The effort to balance the temperature across the hall can be quite extensive.
I guess trying this in a colo site (point 6) would be even more problematic?
Agreed, it takes a custom approach to each application to integrate outside-air free cooling, direct evaporative cooling, and indirect evaporative cooling, with a small amount of DX or chilled water supplemental cooling for peak shaving as a last resort in some climates.
I have more and more colo clients embracing this concept and moving away from CRAC-style systems as they educate their clients on the energy savings potential, especially if the clients are paying the monthly electric bill.
Chillers, CRACS and CRAHs are becoming boat anchors and soon we will be selling operable windows instead of mechanical cooling systems for data centers.
I share your openness to consider any and all comments, but for the life of me I can NOT align any of these stated concerns against the opportunity to raise temperatures in the data center by a few degrees. It seems like much of the discussion is about seeking wild changes in temperature, 20 or 30 degrees or more. The "too much of a good thing is usually bad" rule applies here. In my opinion this is about balance. I cannot see ANY of the reasons discussed above being applicable in going from, say, 70 degrees to 72 degrees. Just that tiny change COULD affect savings quite a bit (although the quoted savings really need to be re-validated, as they likely come from Mark Monroe’s studies when he was at Sun Microsystems YEARS AGO!).
What I can offer is the contractual issues that might arise in commercial data center settings if operating temperatures are written into the contracts. This I see as a potential stumbling block. If tenant contracts are written that guarantee the temperature to be 70-degrees, with penalties for higher recorded values, then this is a business issue, not a technical one.
David Slomer | Posted October 23rd, 2013
Hi. Raising return air is not bunk. Think of it as air flow through the computer-grade coil: the typical CRAC unit is driven by return air, and with no air movement the system will be very inefficient and a lot of high-tech HVAC equipment will be wasted. This is why you need unobstructed below-floor clearance of at least 18 inches and 8–9 foot ceilings. Do not put all whips and data cables in the hot-air return row (this is commonly recommended); it will only lead to CRAC return air starvation. Mix the cables in hot and cold rows, or run data overhead; do not pile them in metal troughs, and do not create air dams. I am not a big believer in free return-air plenums above dropped ceilings, for two reasons: it is next to impossible to align supply-air perfs with ceiling returns, and FM-200 or other agents should discharge above the dropped ceiling as well as below the access floor and in a full room flood. Computer rooms are always changing, and if the argument is for a ducted return, you are set in concrete. Two items: air movement and no blockage. Of course, in-row cooling is a new discussion and has its own rules.
Alwyn | Posted October 24th, 2013
The problem of higher return temperatures is easy to solve: dump the higher return air and implement a fresh-air intake with electrostatic and bag filtering.
Ron, a good article and some interesting views from the DC world.
I have to agree; I have heard most of them.
As a manufacturer of CAC/HAC I can add:
6. SLAs are often set by an inlet temperature, and particularly in hosting/colo there is often a lot of customer legacy kit.
4. Higher return temps. I have heard "don’t want it, as it affects power cables, lighting fixtures and power strips." That last one has been raised when chimneys/HAC get discussed.
1. Ride-through. Mike H and Paul, I agree.
Paul, containment would help with the issue you mention.
We had a customer who had installed CAC and then experienced a cooling outage (a circuit breaker issue). We were monitoring the room as we were doing a case study, and there were no temperature issues for over an hour. Basically, segregation of hot and cold did its job.
Segregation is a key aim for any DC and can allow a small rise to take place. Don’t forget 1 degree can be 5% on energy.
David Slomer | Posted October 24th, 2013
Jeremy, ride-through is usually a UPS/MG set/battery-type term; most highly efficient rooms will exceed 80°F in minutes, with even higher temps at rack tops. One degree F at ground level may be 5 degrees at the top of the rack, and your 5% savings may result in a quarter million in power supply replacement. What room would last an hour with a total outage? Do you mean one CRAC unit? Are you redundant N+1? Air movement is the key. Unless you use remote return-air sensors (not recommended), your very expensive CRAC unit will lose money at low air movement and temperature. I agree with you totally on hot and cold aisle separation. I was also a manufacturer’s rep, but we evolved into a design-build division, and I have seen many of the good points you are addressing.
Reason 5 will always be valid when data center operators live in fear of losing their jobs because the business won’t back them up if there’s a failure.
Don Doyle | Posted October 24th, 2013
While an increase in return air temperature will usually improve the performance of a refrigeration circuit, this may not result in an energy savings at the data center.
I found that modern servers require substantially MORE power when intake temperatures rise. The increase in fan speed accounts for about half the total power increase, and the CPU the balance. If your office is full of older servers that do not control fan speed, you could likely save some energy with a temperature increase. I checked the power on several "modern" servers across the temperature range and found the load increase would cancel all the energy saved on the cooling system, so the net would be near zero.
Since there was no opportunity to save energy, I elected to run the system cooler, as I’d rather there be less total heat in the office than more.
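The trade-off described in the comment above can be sketched numerically: cooling energy falls with a warmer setpoint while server fan and CPU power rise with inlet temperature. Every coefficient below is an illustrative assumption (the real curves depend entirely on the hardware), so this only shows the shape of the calculation, not a real answer:

```python
# Toy net-energy check for raising the setpoint when server power
# rises with inlet temperature. All coefficients are assumptions.

def net_change_kw(it_load_kw, delta_c,
                  cooling_frac=0.40,        # cooling power as a fraction of IT load
                  cooling_save_per_c=0.03,  # cooling energy saved per degree C
                  server_rise_per_c=0.015): # server power rise per degree C inlet
    cooling_kw = it_load_kw * cooling_frac
    cooling_saved = cooling_kw * (1 - (1 - cooling_save_per_c) ** delta_c)
    server_added = it_load_kw * server_rise_per_c * delta_c
    return server_added - cooling_saved   # positive = the raise costs net energy

# With these assumed coefficients, a 3 degree C raise on a 1 MW IT load
# adds slightly more server power than the cooling plant saves:
print(round(net_change_kw(1000, 3), 1))   # → 10.1
```

With a smaller fan-speed response (older servers, or fan curves that stay flat in this range) the same arithmetic flips negative, which is why the commenter’s measurement, not a rule of thumb, should decide the question.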
Also, I now install the evaporator directly onto the rear door of the cabinet so I can collect much higher temperature air than by moving the air to the evaporator. So, I get the super high delta T without a hot office.
Great comments by all! I really appreciate the thought that went into each. I like Rob’s comment regarding “Post Occupancy Optimization” and see the value of periodic commissioning to keep the data center optimized for the current conditions.
Adam Meyer made a comment about low-cost equipment not being able to handle the higher return air temps, which reinforces my belief that we should focus on Total Cost of Ownership over CapEx. Buy equipment that matches the conditions under which you intend to operate your data center and reap the long-term benefits of the savings. You get what you pay for. This does not address the issue with legacy data centers, though. You have to address each data center on its own merits.
Thanks again to everyone for their comments!
David Slomer | Posted October 25th, 2013
Hi Don. Rear or front door cooling is great for small, dense loads, but it is extremely expensive and maintenance-demanding, usually with a remote DX chiller. It is great for experimental racks or very hot racks, but is not generally used; let a few server fans run. Did you install the coils in the doors?
Great article. Great points! The only comment I have is on #1, and this certainly could be a concern, although if an adequate environmental monitoring system integrated with a back-up cooling system is in place, this shouldn’t be as much of an issue. Even where redundant cooling is not available, timely alarm notification will at least allow for quicker resolution of the events that might result in temperature spikes.
Susan Williams | Posted December 9th, 2013
NER has a new patented product called ‘Aurora’ that can allow you to see in *minutes* the impact of raising the temperature. You can also immediately see the impact of adding new equipment to a rack. Aurora utilizes a 69″ strip that attaches to the outside of any cabinet 40u or higher. The strip has 8 sensors spread along the length. You can see the effect of a change from across the room and identify exactly where in the cabinet the temperature has increased. You can easily move the strip to another cabinet at any time. With the enterprise version you can monitor multiple data centers or closets from your desktop, easily identifying potential air flow problems. Information available upon request.
You have hit on some excellent points, Ron. So many people are caught up in what they think is solid conventional wisdom, when in reality they are trying to push technology beyond limits that standard commercial equipment was never designed to perform at.
The semiconductor industry learned this long ago when it realized that facility energy usage for cooling was a huge driver of manufacturing cost. It shifted to specially designed air handling equipment and industrial controls. In today’s mission critical market, the cost of an incident in a data center is even higher and could have a longer business impact.
So the easiest remedy today is treating the symptom by icing over the server hall, rather than providing the cure by looking for a hardened solution in a system that will have little to no cost of ownership over the next decade.
The irony is finding that more money was spent on office furniture than on the BAS/EPMS solution that is driving the survivability of their infrastructure, and many times their business.
In working with those who have stepped into the world of robust technology, I have learned that they sleep better at night, can work in shorts in the halls if they so choose, have a much lower cost of ownership and the lowest possible cooling cost, and have sustainability numbers they can boast about.
It’s a beautiful place to be.
Nice objective article.