Google: Raise Your Data Center Temperature

The biggest players in the data center industry are raising the thermostats in their data centers, with some saving hundreds of thousands of dollars in energy costs in the process.

The latest company to focus attention on temperature in the data center is Google. “The guidance we give to data center operators is to raise the thermostat,” said Erik Teetzel, an Energy Program Manager at Google. “Many data centers operate at 70 degrees or below. We’d recommend looking at going to 80 degrees.”

Most data centers operate in a temperature range between 68 and 72 degrees, and some are as cold as 55 degrees. Raising the baseline temperature inside the data center, known as the set point, can save money spent on air conditioning. Data center managers can save 4 percent in energy costs for every degree of upward change in the set point, according to Mark Monroe of Sun Microsystems, who discussed data center set points at a conference last year. But nudging the thermostat higher may also leave less time to recover from a cooling failure, and is only appropriate for companies with a strong understanding of the cooling conditions in their facility.
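Monroe's 4-percent-per-degree figure is easy to turn into a back-of-the-envelope estimate. The sketch below (in Python) assumes the savings compound per degree of set-point increase; a simple additive model gives similar numbers for small changes, and the dollar figure is purely hypothetical:

```python
def cooling_cost(base_annual_cost, setpoint_increase_f, savings_per_degree=0.04):
    """Estimated annual cooling cost after raising the set point.

    Assumes the cited 4-percent savings compounds for each degree
    of increase -- an illustrative model, not a measured result.
    """
    return base_annual_cost * (1 - savings_per_degree) ** setpoint_increase_f

base = 1_000_000  # hypothetical $1M annual cooling bill
for delta in (0, 2, 5, 10):
    print(f"+{delta}F set point: ${cooling_cost(base, delta):,.0f}")
```

On this model, raising the set point from 70 to 80 degrees would trim roughly a third off the cooling bill, which is in the same ballpark as the six-figure savings Microsoft reports below for a much smaller change in a large facility.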

“The first thing you should do is make sure you know what your airflow looks like,” said Google’s Teetzel. Air flow analysis using computational fluid dynamics (CFD) and placing temperature sensors on server inlets are strategies that can give data center managers a detailed picture of thermal conditions in their facility.

Raising the set point is a key goal of Hewlett-Packard’s Dynamic Smart Cooling product, which incorporates CFD, a sensor network, and a central server to monitor and adjust cooling. A similar approach is offered by DegreeC’s AdaptivCool solution.

How much money can you save by raising the cooling set point in the data center? Microsoft (MSFT) wanted to find out, and tested the impact of slightly higher temperatures in its Silicon Valley data center. “We raised the floor temperature two to four degrees, and saved $250,000 in annual energy costs,” said Don Denning, Critical Facilities Manager at Lee Technologies, which worked with Microsoft on the project.

HP sees even larger gains, and hopes to save as much as $8 million annually through a data center consolidation into brand new data centers using Dynamic Smart Cooling.

How high can you set the temperature without damaging the equipment? The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends an inlet temperature range of 68 to 77 degrees, but has explored expanding that range.
 
Intel recently conducted a 10-month test to evaluate the impact of using only outside air (also known as air-side economization) to cool a high-density data center in New Mexico, where the temperature ranged from 64 degrees to as high as 92 degrees. Intel said it found “no consistent increase” in failure rates due to the greater variation in temperature and humidity. “This suggests that existing assumptions about the need to closely regulate these factors bear further scrutiny,” Intel concluded.

Some data center managers warn that running equipment near the high end of the manufacturers’ suggested range could void vendor warranties. Another major concern is what happens in the event of a cooling failure, when a lower set point could buy a few additional minutes of recovery time before room temperatures reach unacceptable levels. Thermal conditions in a high-density data center change quickly, as illustrated earlier this year at Hosting365, where temperatures rose to 100 degrees within 15 minutes of chillers going offline.

Raising server inlet temperatures doesn’t by itself guarantee better energy efficiency, according to an IEEE study at another Intel data center in Hillsboro, Oregon, which urged data center designers to closely examine the operation of all systems in the facility.

“A thermodynamic analysis clearly indicates that increasing the temperature … will give an efficiency gain to the heat removal from the system,” the IEEE notes in its abstract. “Unfortunately the simple model does not capture all of the components of the overall system and may lead to an erroneous conclusion. In fact, increasing the ambient temperature can lead to an increase in power usage of some components and systems in the data center as temperature goes up. The overall room energy use may only go down marginally or may even go up at warmer temperatures.”

“There’s diminishing returns when you (raise the temperature) beyond a certain amount, because now your cooling systems are working harder to keep up with the increased fan speed on the server,” said Dean Nelson, Sun’s Senior Director of Global Lab and Data Center Design, in a recent video. “You’ve got to find that sweet spot. But it’s not where we are today. It’s not 65. I think it’s probably around 80 to 85.”

About the Author

Rich Miller is the founder and editor-in-chief of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

23 Comments

  1. Bruno

    ASHRAE has recently published a paper with extended "Recommended Environmental Envelope" Guidelines, with drybulb temperatures between 64.4°F and 80.6°F, together with some broader humidity ranges. Visit http://tc99.ashraetcs.org/ for more info.

  2. Mark

    With these kinds of temperatures, you'd damn well better be running some kind of network monitoring solution like Nagios or Groundwork, along with sensors on each server to alert you of any problems.

  3. NVG

The thing that is not mentioned in this article is the fact that you need all the air to be delivered to the heat load before you dial up units. Airflow management will be important.

  4. NVG: Sorry if I wasn't clear on this ... the point of the airflow analysis (mentioned in the fourth paragraph) would be to ensure that the air is reaching the servers. Obviously, if the flow of cool air to the top of the racks is marginal to begin with, you'll have a problem if you raise the set point.

  6. Rico

Temperature rise across the server is typically 20F. You still have to cool the air 20F, whether it is going from 70F to 90F and back to 70F, or from 80F to 100F and back to 80F. You are still using the same amount of energy.

In our office, we try to pump the heat out of the data room and into our office area during the winter. We monitor the airflow, temperature and humidity with gear from http://www.Ravica.com. We have security cameras integrated with the system as well.