Intel: Servers Do Fine With Outside Air

Do servers really need a cool, sterile environment to be reliable? New research from Intel suggests that in favorable climates, servers may perform well with almost no management of the environment, creating huge savings in power and cooling with negligible equipment failure.

Intel’s findings are detailed in a new white paper reviewing a proof-of-concept using outside air to cool servers in the data center – a technique known as air-side economization. Intel conducted a 10-month test to evaluate the impact of using only outside air to cool a high-density data center, even as temperatures ranged between 64 and 92 degrees Fahrenheit and the servers were covered with dust.

Intel’s result: “We observed no consistent increase in server failure rates as a result of the greater variation in temperature and humidity, and the decrease in air quality,” Intel’s Don Atwood and John Miner write in their white paper. “This suggests that existing assumptions about the need to closely regulate these factors bear further scrutiny.”

Intel set up a proof-of-concept using 900 production servers in a 1,000 square foot trailer in New Mexico, divided into two equal compartments served by low-cost direct-expansion (DX) air conditioning equipment. Recirculated air was used to cool servers in one half of the facility, while the other used air-side economization, expelling all hot waste air outside the data center and drawing in exterior air to cool the servers. Intel ran the experiment over a 10-month period, from October 2007 to August 2008.

The temperature of the outside air ranged between 64 and 92 degrees. Intel made no attempt to control humidity and applied only minimal filtering for particulates, using “a standard household air filter that removed only large particles from the incoming air but permitted fine dust to pass through.” As a result, humidity in the data center ranged from 4 percent to more than 90 percent, and the servers became covered with a fine layer of dust.

Despite the dust and variation in humidity and temperature, the failure rate in the test area using air-side economizers was 4.46 percent, not much different from the 3.83 percent failure rate in Intel’s main data center at the site over the same period. Interestingly, the trailer compartment with recirculated DX cooling had the lowest failure rate at just 2.45 percent, even lower than Intel’s main data center.
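How big that gap looks depends on the baseline: against the main data center the difference is well under a percentage point, while against the DX compartment it is a sizeable relative increase. A small sketch of the arithmetic, using only the percentages quoted above:

```python
# Failure rates from Intel's proof-of-concept, in percent of servers failed
# over the 10-month test (figures quoted in the article above).
economizer_rate = 4.46   # trailer compartment cooled with outside air
dx_rate = 2.45           # trailer compartment with recirculated DX cooling
main_dc_rate = 3.83      # Intel's main data center at the site, same period

# Absolute gap versus the main data center: 4.46 - 3.83 = 0.63 points.
print(f"vs. main DC: +{economizer_rate - main_dc_rate:.2f} percentage points")

# Relative gap versus the DX compartment: (4.46 - 2.45) / 2.45, roughly 82%
# higher -- the framing one commenter uses below.
print(f"vs. DX compartment: {(economizer_rate - dx_rate) / dx_rate:.0%} higher")
```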

While the reliability trade-off in the proof-of-concept was small, the energy benefit was huge. Using air-side economizers resulted in a 74 percent decrease in power consumption compared to recirculated air. Based on temperatures in its New Mexico test locale, Intel estimates that it could use economization 91 percent of the time, translating into potential savings of 3,500 kilowatt hours.

That works out to considerable savings at larger scale: the same energy savings template translates into annual savings of $143,000 for a small 500 kilowatt data center, or $2.87 million for a 10 megawatt data center.
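The article doesn’t reproduce the assumptions behind those dollar figures, but they are consistent with roughly 3,500 kilowatt hours saved per kilowatt of IT load per year at an electricity price of about 8 cents per kWh. A minimal sketch of that scaling, with both inputs flagged as inferred assumptions rather than values stated in the white paper:

```python
# Rough sketch of how the article's savings figures scale with facility size.
# The per-kW energy saving and electricity price below are assumptions inferred
# from the numbers quoted above, not values stated explicitly in the white paper.

KWH_SAVED_PER_KW_YEAR = 3_500   # reading the 3,500 kWh figure as per kW of IT load per year
PRICE_PER_KWH = 0.082           # assumed electricity price, ~8.2 cents/kWh

def annual_savings(it_load_kw: float) -> float:
    """Estimated annual dollar savings from air-side economization."""
    return it_load_kw * KWH_SAVED_PER_KW_YEAR * PRICE_PER_KWH

print(f"500 kW data center: ${annual_savings(500):,.0f}")      # ~ $143,500
print(f"10 MW data center:  ${annual_savings(10_000):,.0f}")   # ~ $2,870,000
```

Under those assumed inputs the sketch reproduces both figures quoted above, which is why the savings scale essentially linearly with facility size.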

There are limitations to Intel’s experiment, which would work primarily in areas with warm temperatures and low humidity (we’ve previously noted the advantages this climate profile has provided for Switch Communications in Las Vegas). But the results of Intel’s research provide some meaningful new data points on the reliability of servers across a broader band of heat and humidity conditions.

There’s been much discussion in recent years of raising the cooling set point in data centers. Sun Microsystems says data center managers can save 4 percent in energy costs for every degree of upward change in the set point. The HVAC industry group ASHRAE has also examined the issue, widening its recommended operating ranges for data centers. In practice, many data center managers are wary of trying expanded ranges of heat and humidity in production facilities.
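Taken at face value, Sun’s rule of thumb adds up quickly as the set point climbs. A short sketch of the arithmetic, assuming the 4 percent applies per degree Fahrenheit and compounds multiplicatively (the article doesn’t specify either detail):

```python
# Sun's rule of thumb: ~4% cooling-energy savings per degree of set-point increase.
# Whether the 4% is per Fahrenheit or Celsius degree, and whether it compounds,
# is not stated in the article; this sketch assumes per-degree-F and compounding.

SAVINGS_PER_DEGREE = 0.04

def cooling_savings(degrees_raised: float) -> float:
    """Fraction of cooling energy saved after raising the set point."""
    return 1 - (1 - SAVINGS_PER_DEGREE) ** degrees_raised

for delta in (2, 5, 10):
    print(f"raise set point {delta} F -> ~{cooling_savings(delta):.0%} cooling energy saved")
```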

Intel says it will likely repeat the proof-of-concept with a 1 megawatt data center, and could include air-side economizers in future data center designs.

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

25 Comments

  1. Paul M

    my only concern would be cleaning the air on entry, without ending up with filters which can get clogged up!

  2. Intel's research results would be even more meaningful with a characterization of the servers in the test group by manufacturer and age (generation). An important factor is the temperature rise inside the server chassis: designs that minimize the internal rise will be more tolerant of higher inlet temperatures - and it would be worth knowing which are which!

  3. Hi JH. The servers are described as single-core, dual-socket, Intel-based machines that are about two years old.

  4. We'll be doing the same thing in the next two weeks. http://ae.redrocksdatacenter.com/ for more info.

  5. Fred F

    "filters which can get clogged up!" Well clean them! Or replace them! Or use filterless air cleaners! Problem solved!

  6. Bill G

    I really question the testing that was done because the data presented does not show the actual server power density. I have worked on data centers with racks containing dual Xeon processors with a power draw of 17kW per server cabinet. The temperature rise across these cabinets is nearly 30 degrees F. If your outside air is 92, then the exhaust air temperature is 122! I know that most electronic components are only rated to 104 - I would expect to see massive failures in cabinets with this power density or more. That's really "fine" dust - if it insulates the heat sink it will reduce the heat flux and add to the failures. Maybe they were lucky with thermally conductive dust in NM. Note that 3,500 kWh sounds like a big number, but where I live power is less than $0.06/kWh - that's $210, per month or per year? What kind of a white paper is this? I am very familiar with economizer cooling, but with these temperature extremes I want to see a whole lot more data before I would try this on a 10MW data center! Apparently you are not saving infrastructure, just energy, so the complex will cost the same either way. Operating costs are expensed, so most companies don't get too excited about this approach.

  7. John H

    You know what they say about lies, damned lies and statistics... Perhaps we should add "editorial characterization" to that list. The failure rate of the servers on the air economizer side of things was 82% higher than the failure rate of the server "control group". That's practically double the failure rate! Yet the white paper authors characterize this difference as "a minimal difference between the 4.46 percent failure rate in the economizer compartment and the 3.83 percent failure rate in our main data center over the same period". Furthermore, the test was only run for 10 months. If, as I suspect, the higher failure rate was partly due to dust buildup degrading the performance of the heat sinks, we could expect the failure rate to increase over time. Also, by changing so many variables at once, this experiment does not give us any useful information about which of the variables - higher temperatures, more variable temperature, dust, higher humidity, lower humidity or more variable humidity - were the primary contributors to the higher failure rate, or whether it was a little of each. Bottom line: it's an interesting and thought-provoking experiment, but I'm not going to rush out and recommend this kind of design to my clients until I have more information about longer-term results and about which of the variables are the biggest contributors to the higher failure rates.

  8. I think the benefit being illustrated isn't that anyone should go out and duplicate Intel's test conditions for their production data center. The dust buildup is an example. I'm sure Intel would filter dust more effectively in a production environment using air-side economizers, but it might still realize energy gains by allowing a broader temperature range in the data center.

  9. Fridge

    Rather than looking at the value of using outside air for cooling, I am more interested in the potential of continuing to use recycled/cooled inside air but running the data center at higher temps.

  10. Why even bother? Interconnect water-cooled, closed-loop, sealed micro containers in a "warehouse-like" building shell and stop mucking about with a flawed system. Chuck it. Hot aisle/cold aisle will not survive Moore's law. Set your air-cooled water chiller outside the building.

  11. I am just starting research on outside air and the possible effects on data center equipment. This article, while interesting, does not address my major concern: hygroscopic dusts. It is likely the dusts in NM are not of the variety found in coastal areas (where most of the world's population lives). My research has found data supporting a level of 50% of dust particles being sea salt in coastal areas. Depending on the season, location, wind speed and direction, the numbers can reach nearly 100%. Because the salt particles are the result of dried sea spray, the sizes can be very small (0.1 to 5 microns) and therefore difficult to filter. If this layer of fine dust in NM was 50% salt, I wonder how well these servers would work at 90% RH? The same could be true for corrosion and related failures if this data center were run for years in a much more humid climate. I am working on a test to use powdered salt as part of a dust mixture to test servers. It may be possible to use mostly outside air for cooling, but I would want to make sure my servers could handle it first!

  12. Data center operators are being asked to take several leaps of faith when adopting the outside air concept with regard to particulate filtration, corrosive gases, operating temperature, humidity control, and space pressurization. Taking all these risks at one time makes it difficult to identify which one may have caused you to experience a failure. 1) Gaseous contaminants with corrosive properties cannot be easily filtered. You may or may not be in a high-risk area. Also, if you have on-site generators that come on line, you will likely want to control economizers to the off position to avoid diesel exhaust being pulled into your space through the outside air louvers at the data center air handler. 2) Humidifying the stream of cold winter air requires humidification equipment with much higher capacity to avoid static discharge. Humidification at this scale is not cheap. 3) Space pressurization control is difficult when moving large volumes of air in and out of the building. Pulling the data center under a negative pressure could result in untreated and unfiltered air pulled in from adjoining spaces or cracks in the building. 4) Starting off at a high operating temperature leaves little room for a cooling flywheel effect when mechanical systems temporarily go down. You go from 92 to 130 F much quicker than from 65 to 130 F when your high-density servers are waiting for the generators to come on line to restore power to the cooling equipment. At the very least, factor the fans and pumps into your UPS load calculation if taking this approach for higher room temperature operation. If we are going to ask our clients to try out the outside air economizer concept, they should be made fully aware of the risks they might be taking to gain the benefits of lower energy costs.

  13. This is always an interesting idea, and thinking outside the box is definitely the way to go, especially when cost cutting is such a huge draw. My company has been using this technique since August 2008. We started with our internal datacenter and have been pushing it whenever we do a proposal for datacenter construction (along with an option that features conventional air conditioning). We haven't had many bites because people are afraid of the "dirty" outside air, but we have gotten a lot of raised eyebrows from clients and peers.

  14. I applaud the research on using outside makeup air or air economizers. However, I heard an anecdote about a series of small data centers that were located near the coast. Economizers were installed in these data centers, but the salt content of the air reportedly caused corrosion problems with the HVAC equipment and the servers. Does anyone have experience with corrosion due to salt in the makeup air?

  15. For those of you on the fence: we have two rooms using economizers and are looking forward to another fall and winter saving $1000s on cooling. We're coming up on a year, and although we're a small data center, I've had a whopping 2 customers lose hard drives in their colo machines... Feel free to contact me if you are still skeptical. We're in Morrison, Colorado, which works great for us, though it wouldn't for other parts of the country.

  16. Great comments everyone - the truth is, our (Intel’s) experiment was almost revolutionary at the time, but today the approach is widely adopted. Due to NDA I can’t say who, where, or how, but I walked a new DC today that was 100% free cooling 365 days per year, and it was at a very large scale that makes my test of 900 servers look small. If you do it in the right location, it works; it’s not debatable anymore. There are very large scale installations globally, and many large companies are not even installing chillers or DX at all - they are using 100% outside air… not in Phoenix or other hot climates, of course.

  17. We have been sucking in outside air for cooling for 15 years, using a custom designed system (designed by me). When we first set it up, people said we were nuts. Even lost a few customers over it. Now Google and Facebook are doing it. I feel like calling everyone who said we were nuts back then and asking how they feel now. Vastnet staff

  18. William

    I am amazed at the conservative reaction to this article. Why haven't the fools who are afraid of the outside air run out of money yet? There are no issues here that have not been dealt with exhaustively - if you are still using a sealed box for your data center, you will be charging more and providing less reliable infrastructure than the DC with an economizer. That's foolish and stupid. Live in a poor climate for economization? Move to the west coast.
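Bill G's exhaust-temperature figure in comment 6 follows from a simple energy balance: the temperature rise across a cabinet is the heat load divided by the mass flow of air times its specific heat. A minimal sketch of that estimate; the airflow number is an assumption chosen to illustrate the calculation, not a measurement from Intel's test or the comment:

```python
# Back-of-the-envelope check of the exhaust temperature in comment 6:
# delta_T = P / (m_dot * c_p), where P is the cabinet heat load and m_dot
# the mass flow of cooling air. The airflow value below is an illustrative
# assumption, not a measurement from Intel's test.
P_WATTS = 17_000        # cabinet heat load cited in comment 6 (17 kW)
AIRFLOW_CFM = 1_800     # assumed airflow through the cabinet, cubic feet per minute
AIR_DENSITY = 1.2       # kg/m^3, air at roughly sea-level conditions
CP_AIR = 1_005          # J/(kg*K), specific heat of air

m_dot = AIRFLOW_CFM * 0.000471947 * AIR_DENSITY   # convert CFM to kg/s
delta_t_f = (P_WATTS / (m_dot * CP_AIR)) * 9 / 5  # temperature rise in Fahrenheit

print(f"rise across cabinet ~ {delta_t_f:.0f} F; with 92 F intake, exhaust ~ {92 + delta_t_f:.0f} F")
```

With roughly 1,800 CFM through a 17 kW cabinet, the rise works out to about 30 F, matching the 92-to-122 F scenario the comment describes; more airflow per cabinet would reduce the rise proportionally.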