Facebook Seeks Patent on Cooling Automation

The server room at the new Facebook data center in Prineville, Oregon, featuring a hot aisle containment system. (Photo credit: Alan Brandt)

Engineers from Facebook are seeking a patent on a data center cooling system that uses a load balancer to automatically shift workloads among racks of servers. The system can also manage fans that adjust the volume of air in either the hot aisle or cold aisle.

The system described by Facebook is one of several approaches to designing intelligent cooling systems in which servers, sensors and cooling equipment can “talk” to one another to provide advanced management of high-density racks of IT equipment. Facebook’s technology targets a particular challenge in large data centers – the tendency for on-board server fans to fight with row-level cooling systems as the temperature rises.

Raising the temperature in the data center can save big money on power costs. In recent years, industry research has shown that servers can perform effectively at temperatures above 80 degrees F, well above the average ranges in the low 70s at which most data centers are maintained.

But if you nudge the thermostat too high, the energy savings can evaporate in a flurry of fan activity. Several studies in which servers were tested at higher temperatures discovered that on-board server fans kicked on between 77 and 80 degrees. This fan activity consumed energy that offset the gains from using less room-level cooling.
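The offset is easy to see from the fan affinity laws, under which a fan's power draw grows roughly with the cube of its speed. Here is a minimal sketch of that arithmetic; the wattage and RPM figures are illustrative assumptions, not numbers from the studies cited above.

```python
# Fan affinity law: power draw scales roughly with the cube of fan speed.
# The base wattage and RPM values below are illustrative only.

def fan_power(base_watts: float, base_rpm: float, rpm: float) -> float:
    """Estimate fan power at a given speed using the cube law."""
    return base_watts * (rpm / base_rpm) ** 3

# A server fan drawing 10 W at 5,000 RPM, ramped up 50% as inlet
# temperatures climb, more than triples its power draw:
idle = fan_power(10.0, 5000, 5000)
ramped = fan_power(10.0, 5000, 7500)
print(f"{idle:.2f} W at 5,000 RPM vs {ramped:.2f} W at 7,500 RPM")
```

Multiplied across thousands of servers, that cubic ramp-up is how a few degrees of extra warmth can erase the savings from running the room hotter.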

Companies like Facebook and Microsoft have sought to address this by reducing or eliminating on-board server fans. This approach only works in a design in which airflow is closely managed and monitored, typically by using aisle containment and temperature sensors that provide greater control over conditions in the racks.

Facebook applied these techniques in several data center retrofits of its leased data center space in 2010, which involved detailed analysis of fan speeds. The company worked with its server vendor to adjust the algorithm driving the fan speeds.

The patent application by Facebook engineers Amir Michael and Michael Paleczny was submitted in 2010 and recently made public. Most cooling automation systems focus on adjusting the airflow being provided to the racks of servers. But the Facebook patent filing describes the use of a load balancer that can redistribute the workload across servers to shift compute activity away from “hot spots” inside racks. The Facebook system also can adjust fans that manage airflow entering the cold aisle and exiting the hot aisle, providing multiple ways to adjust for changing thermal conditions.
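In the spirit of that approach, a thermal-aware load balancer might simply prefer the coolest server with spare capacity when placing new work. The sketch below is a hypothetical illustration of the concept, not the patented implementation; the threshold, field names, and rack labels are all assumptions.

```python
# Hypothetical thermal-aware placement: steer new work away from servers
# whose inlet temperature is running hot. Threshold and data are
# illustrative, not drawn from the patent filing.

HOT_THRESHOLD_F = 80.0

def pick_server(servers):
    """Choose the coolest eligible server; break ties by current load.

    `servers` is a list of dicts with 'name', 'inlet_temp_f', and 'load'.
    """
    eligible = [s for s in servers if s["inlet_temp_f"] < HOT_THRESHOLD_F]
    pool = eligible or servers  # fall back if every server is hot
    return min(pool, key=lambda s: (s["inlet_temp_f"], s["load"]))

racks = [
    {"name": "rack1-u04", "inlet_temp_f": 84.2, "load": 0.35},
    {"name": "rack1-u12", "inlet_temp_f": 76.5, "load": 0.60},
    {"name": "rack2-u07", "inlet_temp_f": 71.9, "load": 0.80},
]
print(pick_server(racks)["name"])  # coolest inlet wins: rack2-u07
```

A production system would of course weigh many more signals (pressure readings, fan speeds, workload migration cost), which is where coordinating the balancer with aisle-level fans, as the filing describes, comes in.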

The submission builds upon the techniques described in a 2009 patent submission by members of Facebook’s data center team, which focused on designs that would allow servers to operate without fans, including modifications inside the server chassis to improve airflow and provide more cool air to components.

The Facebook patents discuss reducing the use of fans by activating them only within a certain temperature range, or going without fans altogether. The Open Compute designs released by Facebook in April 2011 feature server chassis with four 60 millimeter fans at the rear of the server. The use of a 1.5U chassis allows the use of the 60 millimeter fans, which are more efficient than the 40 millimeter fans seen in many 1U chassis.

Facebook is hardly alone in seeking to solve these problems. Over the past five years, a number of data center researchers and vendors have focused on automated cooling systems that can adjust to temperature and pressure changes in the server environment. Here are a few examples:

  • In 2007, HP introduced Dynamic Smart Cooling, a system that deployed sensors throughout the data center to communicate with the air conditioning systems. HP used the system in its own data centers, but the technology was lightly adopted by customers.
  • In 2008, Opengate Data Systems introduced a heat containment system for data center racks, equipped with modules that monitor air pressure in a server cabinet and can adjust fan activity based on pressure within the cabinet.
  • In 2009, Lawrence Berkeley Labs and Intel developed a proof-of-concept that integrated a sensor network into building management systems, which could then adjust the output of cooling systems in response to changes in server temperature and pressure readings at the top and bottom of each rack.
  • In 2010, Brocade opened a new data center on its San Jose campus with a network of 1,500 temperature sensors tied into a building management system, which can auto-adjust cooling as workloads shift.
  • In 2010, SynapSense introduced Adaptive Control, software that can dynamically adjust the temperature set points and fan speed in computer room air handlers (CRAHs) based on sensor readings of server inlet temperatures and sub-floor air pressures.


About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

Add Your Comments



  1. datacenternerd

    I really hope they do not go through with this. For as open as they have been in the datacenter design space this is archaic and I hope VMware vMotion qualifies as prior art.

  2. Thank you Rich for the recognition of Opengate Data Systems as a cost effective airflow solution

  3. In 2005 my company, AdaptivCool was awarded US Patent 6881142 for networked intelligent fans in data center floors and ceilings. We have since expanded this technology to controlling CRAC units as well as full data center environmental monitoring.

  4. It will be interesting to follow the patent application. People have spoken about doing this for at least 5 years, and certainly VMware's DPM (distributed power manager) has some parts of the solution, as have dynamic cooling systems from SynapSense, HP etc. But moving IT workloads in dynamic response to (or anticipation of) overheating is still very leading edge, even if the concepts have been floating around for some time. Coupled with the ideas in "follow the moon" to avoid electricity pricing, this could be another step towards completely chiller-free datacenters.

  5. Not sure if this really needs a patent, as I think Dell and other companies have some very similar systems that take into account the load of the servers. I guess if it is really revolutionary it would be worth it. Still, if it helps the environment by cutting down on wasted energy by directing airflow where it is needed then I am all for it.