Google Patent Reveals Data Center Innovations

A diagram of an "air wand" indicating the location of cooling vents in the wand, a key feature of a patent application by Google data center engineers.

Google has revealed some of the secret technology inside its mighty data centers, but its engineers are busy cooking up new secrets.

An example: Google is seeking to patent an advanced data center cooling system that provides precision cooling inside racks of servers, automatically adjusting to temperature changes while dramatically reducing the energy required to run chillers.

The cooling design, which could help Google slash the power bill for its servers, reinforces Google’s focus on its data centers as a competitive advantage in its battle with Microsoft and other rivals for leadership in cloud computing. The company has customized much of the operation of its data centers, which serve as the engines powering its massive Internet business. Google builds its own servers and networking switches, and now appears to be customizing the racks that hold them.

Precision Cooling via ‘Air Wands’
The innovative rack cooling design features an adjustable piping system, including “air wands” that provide small amounts of cold air to components within a server tray. The chilled air enters the top of a rack through two vertical standpipes, which branch off into air wands – long, thin pipes lined with vents that release cold air.

The air wands can pivot to direct cold air at specific components, or be swung to one side to allow equipment to be removed from the rack. Dampers on each standpipe can open and close to regulate the volume of air flowing into the pipe and air wands, while the vents on each individual air wand can be adjusted to point up or down, allowing for a highly configurable system. (See A Closer Look at Google's New Cooling Design for a diagram.)
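
To make the moving parts easier to picture, here is a minimal sketch in Python modeling the hardware described above. The class names, defaults and value ranges are illustrative assumptions, not details taken from the patent.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AirWand:
    """A long, thin pipe lined with vents that release cold air over a tray."""
    vent_angle_deg: float = 0.0   # vents can be adjusted to point up or down
    swung_aside: bool = False     # swung to one side so a tray can be removed

    def aim(self, angle_deg: float) -> None:
        # Clamp to an assumed tilt range of +/-45 degrees.
        self.vent_angle_deg = max(-45.0, min(45.0, angle_deg))

@dataclass
class Standpipe:
    """A vertical supply pipe whose damper regulates airflow into its wands."""
    damper_open: float = 0.5      # 0.0 = fully closed, 1.0 = fully open
    wands: List[AirWand] = field(default_factory=list)

@dataclass
class Rack:
    """A server rack fed by vertical standpipes, as the patent describes."""
    standpipes: List[Standpipe] = field(default_factory=list)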

Exaflop and Its History
It’s not clear whether Google is already using the cooling system. But the patent application was submitted by Exaflop LLC, whose 2008 patent for a UPS system integrating batteries with server power supplies helped Google achieve 99.9 percent UPS efficiency and record low Power Usage Effectiveness (PUE) scores. The address for Exaflop is 1600 Amphitheatre Parkway in Mountain View, Calif., which is Google’s headquarters. The inventors listed on the patent are Google employees Jimmy Clidaras and Winnie Leung.
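
For context, PUE, the industry's standard efficiency metric, is simply total facility power divided by the power delivered to IT equipment, so a perfectly efficient facility would score 1.0. A quick illustration in Python (the numbers are made up for the example):

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,200 kW overall to support 1,000 kW of IT load:
print(pue(1200.0, 1000.0))  # 1.2, i.e. 0.2 W of overhead per watt of IT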

The system designed by Clidaras and Leung addresses many of the most vexing challenges in data center energy efficiency. It allows Google to apply small amounts of cold air precisely where it is needed, rather than cooling an entire server room and seeking to steer the airflow into each rack and across the hot server components.

Going Beyond Containers
Google has used data center containers to isolate hot and cold air and gain greater control over airflow to its servers. The new design takes this concept to a more granular level of management. The air wands can apply cool air directly to the “hot spots” inside a server tray, meaning less air is wasted or misdirected in the server room or container. This could allow Google to use a smaller chiller plant in its data centers, saving energy in the process.

Chillers, which are used to refrigerate water for use in data center cooling systems, require a large amount of electricity to operate. With the growing focus on power costs, many data centers are trying to reduce their reliance on chillers.

This has boosted adoption of “free cooling,” the use of fresh air from outside the data center to support the cooling systems. This approach allows data centers to use outside air when the temperature is cool, while falling back on chillers on warmer days. The new design could be used as supplemental cooling in a data center using free cooling, or in facilities located in areas where fresh air cooling isn’t feasible.
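
The switchover logic behind free cooling is straightforward. Here is a minimal sketch, assuming a fixed supply-air setpoint and an illustrative approach margin; neither value comes from the article or the patent.

def cooling_mode(outside_temp_c: float, supply_setpoint_c: float,
                 approach_c: float = 2.0) -> str:
    """Pick free cooling when outside air is comfortably below the
    target supply temperature; otherwise fall back on the chillers."""
    if outside_temp_c <= supply_setpoint_c - approach_c:
        return "free-cooling"
    return "chiller"

# A 22 C supply setpoint on a 15 C day versus a 28 C day:
print(cooling_mode(15.0, 22.0))  # free-cooling
print(cooling_mode(28.0, 22.0))  # chiller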

Limitations of Free Cooling
Google is operating a chiller-less data center in Belgium, where the climate allows nearly year-round use of free cooling. But this strategy will only work in cooler regions, and Google’s global ambitions may eventually require data centers in hotter climates unsuitable for free cooling.

Google can gain additional control over its cooling system through automated monitoring and management, as the system is designed to respond to changes within the rack as temperatures fluctuate. “The temperature sensor output can be fed to a computer program that triggers air distribution in the event of the board temperature crossing a threshold,” the patent reads. “Each temperature sensor may be connected to a PID control loop with a damper, so the corresponding damper is opened … with an increase in temperature sensed for a particular area.”
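
The patent doesn't publish its control code, but the scheme it describes maps onto a textbook PID loop. Here is a minimal sketch of one sensor-damper pair; the setpoint and gains are illustrative assumptions, not values from the patent.

class PIDController:
    """Standard PID loop: output rises as the board runs over its setpoint."""
    def __init__(self, kp: float, ki: float, kd: float, setpoint_c: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint_c = setpoint_c
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured_c: float, dt_s: float) -> float:
        # Positive error means the board is hotter than the setpoint.
        error = measured_c - self.setpoint_c
        self.integral += error * dt_s
        derivative = (error - self.prev_error) / dt_s
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def damper_position(pid_output: float) -> float:
    """Map controller output to a damper opening between 0 and 1."""
    return max(0.0, min(1.0, pid_output))

# One control tick: a board reading 3 C over its 30 C setpoint nudges
# the corresponding damper open.
pid = PIDController(kp=0.2, ki=0.01, kd=0.05, setpoint_c=30.0)
print(damper_position(pid.update(measured_c=33.0, dt_s=1.0)))  # 0.78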

Some Secrets Revealed, While Others Incubate
Google’s data center designs were kept secret for many years, consistent with the company’s belief that its data center innovations gave it a competitive advantage. In April Google discussed its data center operations for the first time, joining a growing industry conversation about best practices for energy efficiency.

The company revealed its data center containers, custom server design and on-board UPS, among other innovations. But some industry observers concluded that there was more in the pipeline that Google wasn’t discussing.

“Both the board and the data center designs shown in detail were not Google’s very newest, but all were excellent and well worth seeing,” James Hamilton noted at the time. “I like the approach of showing the previous generation technology to the industry while pushing ahead with newer work. This technique allows a company to reap the potential competitive advantages of its R&D investment while at the same time being more open with the previous generation.”

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

7 Comments

  1. EXCELLENT post, thanks very much. Indeed, it will be nice once this has matured sufficiently so that other organizations can also benefit by investing in these sorts of targeted cooling improvements.

  2. Thanks for the update. This approach is not new, and even design firms such as RTKL have explored and patented alternate air delivery methods. In the past, pressure losses and leakage at connections have been problematic; is there an approach that addresses those issues? Also, will the cost increase for having the wands be marginal by comparison?

  3. Here's a link to the RTKL patent that John Peterson references, which describes "an elongated duct defining a passageway for air distribution ... positioned within the internal chamber (of the rack) and has a plurality of adjustable air discharge ports that are in fluid communication with the passageway."

  4. Yang

    Got to this article from a link on another site. Cooling the hot spots directly can reduce the required cooling airflow, but because the heat load is unchanged, the exhaust air will be warmer than before. If the vent (supply) air temperature is held constant and the exhaust air ends up warmer than the outside air, it can simply be vented outdoors, which is a good point; otherwise the chiller carries the same heat load, and the only saving is in fan power. Water cooling may save even more energy, with lower pump power, I guess.

  5. Thank you very much. Google's data centers have excellent cooling systems. The water cooling method is a really intelligent idea.