Will Server Warranties Get Hotter, Too?

The higher temperature settings supported by Rackable's new CloudRack C2 enclosure generated quite a bit of discussion last week at Slashdot and elsewhere. The CloudRack C2 can operate in environments as hot as 104 degrees F (40C), offering customers the option of cutting energy costs by raising the temperature in their data center.

The aggressive new approach to data center temperature has implications for equipment vendors, as noted by James Hamilton. "The best way to make cooling more efficient is to stop doing so much of it," he writes. "I've been asking all server producers, including Rackable, to commit to full warranty coverage for servers operating with 35C (95F) inlet temperatures. Some think I'm nuts, but a few innovators like Rackable and Dell fully understand the savings possible. Higher data center temperatures conserve energy and reduce costs. It's good for the industry and good for the environment. To fully realize these industry-wide savings we need all data center IT equipment certified for high temperature operations, particularly top of rack and aggregation switches."

Google, which favors running data centers hotter than 80 degrees, makes its own servers. Google's practices on data center temperature have prompted discussions with Intel, according to The Register, which says Google has asked Intel to certify its chips to operate at temperatures five degrees warmer than its standard specs. Intel has run testbeds using outside air to cool servers in the data center – a technique known as air-side economization. Intel found negligible differences in equipment failure at temperatures as high as 92 degrees, leading it to conclude that "existing assumptions about the need to closely regulate (heat and humidity) bear further scrutiny."

A wider temperature range in server warranties would likely prompt more data center managers to experiment with warmer thermostat settings. Is it likely to happen?


About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

Add Your Comments



  1. As you point out, it's not just server warranties. It does me little good if Dell warrants my server operating at 95F if my networking and storage products still insist on 80F, unless I can segregate my data center to keep servers away from everything else, which is pretty tough to do. First step would just be to get data centers to operate at the ASHRAE recommended levels, which many don't.

  2. Network switches also present design challenges because they typically use side-to-side airflow rather than the front-to-back design used by most servers. TechTarget recently had a story about getting data center airflow to work for both types of equipment.

  3. Jeff

    What will this do for redundancy planning? It's true that today, servers can run with a much higher inlet temperature than what's specified. With these new high-temp servers, will the difference between the hottest specified temperature and the actual failure temperature be the same as it is today? If not, there will be increased need for cooling redundancy because you are running a "seat of the pants" configuration where just a few degrees of overheat will have your servers melting down. The one nice thing about running conventional servers at conventional temperatures is that you have some wiggle room when air handlers fail.