How Google Cools Its Armada of Servers


Here’s a rare look inside the hot aisle of a Google data center. The exhaust fans on the rear of the servers direct server exhaust heat into the enclosed area. Chilled-water cooling coils, seen at the top of the enclosure, cool the air as it ascends. The silver piping visible on the left side of the photo carries water to and from the cooling towers. (Photo: Connie Zhou)

Google has shared some of its best practices over the years, but other parts of its data center operations have remained under wraps. One of the best-kept secrets has been the details of its cooling system, which allows Google to pack tens of thousands of servers into racks.

Google Senior Director of Data Centers Joe Kava discussed the design of its cooling system with Data Center Knowledge in connection with the company’s publication of a photo gallery and a Street View app that provide Google’s millions of users with a look inside its data centers. If you’re one of those data center managers who worries about having water in close proximity to the IT equipment, what you’re about to read might make you nervous.

In Google’s data centers, the entire room serves as the cold aisle. There’s a raised floor, but no perforated tiles. All the cooling magic happens in enclosed hot aisles, framed on either side by rows of racks. Cooling coils using chilled water serve as the “ceiling” for these hot aisles, which also house large stainless-steel pipes that carry water to and from cooling towers housed in the building’s equipment yard.

Following the Airflow

Here’s how the airflow works: The temperature in the data center is maintained at 80 degrees, somewhat warmer than in most data centers. That 80-degree air enters the server inlet and passes across the components, becoming warmer as it removes the heat. Fans in the rear of the chassis guide the air into an enclosed hot aisle, which reaches 120 degrees as hot air enters from the rows of racks on either side. As the hot air rises to the top of the chamber, it passes through the cooling coil, is cooled to room temperature, and is then exhausted through the top of the enclosure. Flexible piping connects to the cooling coil at the top of the hot aisle, descends through an opening in the floor, and runs under the raised floor.
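
To put rough numbers on that airflow, the sketch below applies the standard sensible-heat rule of thumb for air (about 1.08 BTU/hr per CFM per degree Fahrenheit) to the 80-to-120-degree rise described above. The 20 kW rack density is an assumed figure used only for illustration, not something Google has disclosed.

    # Rough airflow estimate from the temperatures quoted in the article:
    # 80 F at the server inlet, 120 F in the enclosed hot aisle.
    # Rule of thumb for standard air: Q[BTU/hr] ~ 1.08 * CFM * delta_T[F].
    # The rack power below is an illustrative assumption, not a Google figure.

    BTU_PER_HR_PER_KW = 3412.0   # 1 kW of IT load expressed in BTU/hr
    SENSIBLE_FACTOR = 1.08       # BTU/hr per CFM per degree F (standard air)

    def cfm_per_kw(inlet_f, exhaust_f):
        """Airflow (CFM) needed to carry away 1 kW at the given temperature rise."""
        delta_t = exhaust_f - inlet_f
        return BTU_PER_HR_PER_KW / (SENSIBLE_FACTOR * delta_t)

    if __name__ == "__main__":
        per_kw = cfm_per_kw(80.0, 120.0)   # about 79 CFM per kW at a 40-degree rise
        assumed_rack_kw = 20.0             # hypothetical rack density
        print(f"{per_kw:.0f} CFM per kW of IT load")
        print(f"{per_kw * assumed_rack_kw:.0f} CFM for a {assumed_rack_kw:.0f} kW rack")

The wider the temperature spread between the cold room and the hot aisle, the less air the fans have to move per kilowatt, which is part of the payoff of running the room warm and fully enclosing the hot aisle.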

Despite the long history of water-cooled IT equipment, which dates to IBM mainframes, some managers of modern data centers are wary of having water piping adjacent to servers and storage gear. Many vendors of in-row cooling units, which sit within a row of cabinets, offer the option of using either refrigerant or cooled water.

Kava is clearly comfortable with Google’s methodology, and says the design incorporates leak detection and fail-safes to address piping failures.

“If we had a leak in the coils, the water would drip straight down and into our raised floor,” said Kava, who said pinhole leaks and burst coils could be slightly more problematic. “We have a lot of history and experience with this design, and we’ve never had a major leak like that.”
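
Kava didn’t describe how the leak detection is actually wired, so the sketch below is purely hypothetical: it shows the general shape of a coil-leak fail-safe, in which moisture sensors beneath each coil trigger an isolation valve on that coil’s supply line. The sensor and valve interfaces are invented for illustration.

    # Hypothetical illustration of a coil-leak fail-safe: poll moisture
    # sensors beneath each cooling coil, and if one trips, close an isolation
    # valve on that coil's supply line and raise an alert. The sensor and
    # valve functions are stand-ins, not Google's actual controls.

    import time

    LEAK_THRESHOLD = 0.5  # arbitrary reading above which we assume a leak

    def read_moisture_sensor(sensor_id):
        """Stand-in for reading a leak sensor via a building-management system."""
        return 0.0  # placeholder value

    def close_isolation_valve(coil_id):
        """Stand-in for commanding a motorized valve shut on one coil's supply line."""
        print("closing isolation valve for coil " + coil_id)

    def monitor(coils, poll_seconds=5.0):
        """coils maps coil_id -> sensor_id; isolate any coil whose sensor trips."""
        while True:
            for coil_id, sensor_id in coils.items():
                if read_moisture_sensor(sensor_id) > LEAK_THRESHOLD:
                    close_isolation_valve(coil_id)
                    print("ALERT: possible coil leak at " + coil_id)
            time.sleep(poll_seconds)

Even a sketch like this makes the point Kava is relying on: a drip lands in the raised-floor plenum rather than on hardware, and anything larger can be detected and isolated quickly.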

Focused on Efficiency, Not Frills

Kava says the design – known as close-coupled cooling – is significantly more efficient than facilities that use a ceiling plenum to return the hot exhaust air to computer room air conditioners (CRACs) housed around the perimeter of the raised floor area. “The whole system is inefficient because the hot air is moved across a long distance as it travels to the CRACs,” said Kava.
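
The efficiency argument comes down largely to fan energy: fan power scales with the airflow moved and the static pressure the fans must overcome, and a long return path through a ceiling plenum adds pressure. The sketch below uses the standard air-horsepower relation with assumed pressure drops, airflow, and fan efficiency; none of the numbers are measurements from Google’s facilities.

    # Illustrative fan-energy comparison: a short, close-coupled air path
    # versus a long return path to perimeter CRAC units. Static pressures,
    # airflow, and fan efficiency are assumptions for illustration only.

    def fan_power_watts(cfm, static_in_wg, fan_efficiency):
        """Fan shaft power from the air-horsepower relation:
        hp = CFM * static pressure (inches w.g.) / 6356, converted to watts."""
        air_hp = (cfm * static_in_wg) / 6356.0
        return air_hp * 745.7 / fan_efficiency

    if __name__ == "__main__":
        airflow_cfm = 100_000  # hypothetical airflow for one room segment
        close_coupled = fan_power_watts(airflow_cfm, 0.3, 0.6)   # short path, low pressure
        perimeter_crac = fan_power_watts(airflow_cfm, 0.9, 0.6)  # long plenum path
        print(f"close-coupled : {close_coupled / 1000:.1f} kW of fan power")
        print(f"perimeter CRAC: {perimeter_crac / 1000:.1f} kW of fan power")

At the same airflow, tripling the static pressure roughly triples the fan power, which is the gist of Kava’s objection to moving hot air a long distance before it reaches a cooling coil.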

Nearly all facets of Google’s design are focused on efficiency – and that doesn’t just mean efficiency with power. It also includes efficiency with cost. An example: Google’s use of plastic curtains instead of rigid containment systems to manage the airflow in its networking rooms.

Google’s custom servers also have a bare-bones look and feel, with components exposed as they slide in and out of racks. This gives admins easy access when they need to replace components, and it also avoids the cost of the cosmetic trappings common to OEM servers.

“When you pull out one of our server trays, it’s like a cookie sheet with a couple of sides,” said Kava. “It’s inexpensive. We’re not going for fancy covers or sheet metal skins.”

Kava said Google-watchers can expect more information on the company’s best practices in coming weeks. “Our intention is to follow this up with a series of blogs highlighting our technology,” he said.

The server area in Google’s data center in Mayes County, Oklahoma, provides a look at Google’s no-frills servers. (Photo: Connie Zhou for Google)

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.


19 Comments

  1. A statement in this article concerns me and may need some clarification for the reading audience. It states that, "As the hot air rises to the top of the chamber, it passes through the cooling coil and is cooled to room temperature, and then exhausted through the top of the enclosure." If the air is truly "exhausted" out of the enclosure and not used again, there is no reason to cool it unless it is recirculated. And, even if the air is returned to some other part of the datacenter, it would be much more efficient to use outside air (at temperatures well below 120F) than to cool and return the server exhaust. And, Google's servers should be able to accept outside air most of the year. Can you have Joe explain where the exhaust air is headed and why he wants to cool it? Thanx! Bruce

  2. Brendan

    It would cost a fortune to filter, dehumidify, and condition all of that outside air. Maintenance would be huge. It should be cheaper to use the chillers and outside cooling towers which already use ambient temperatures to cool the water anyway. During cold winters, chiller costs should be very minimal. I suspect the cool exhaust air from the top of the cooling coil simply flows back into the room, thus into the server intakes again.

  3. Peter

    @Bruce... by "exhausted" they mean outside of the enclosed hot aisle but still within the data centre. This air has controlled humidity, and is filtered for particulates. Filtering and de-humidifying the huge volume of air required to use direct free cooling is expensive. At the other end of the cooling loop, the hot water, having been warmed in the hot aisle, can be recycled using so-called "free cooling", or outside air. In some cases, they may even free cool using adjacent waterways (using fresh water to cool the hot water), but again, the process is likely a closed loop where none of the filtered (perhaps distilled?) water leaves the circulation loop.

  4. Martyn Smith

    Exhausting the air completely would mean that the datacentre would experience a large intake of new air. This isn't a good thing because, given the varying humidity of incoming air, there would have to be extra controls on it. Humidity and static charge in the air can contribute to electrical component failure. Normally you would reuse the air after cooling so that humidity stays controlled.

  5. Advert

    They re-use the air because it's clean. If they were to bring in new air, they would have to filter it, etc, which would require a lot of maintenance versus simply re-cooling the old air. At least, that's what I think. No one likes dusty data centers!

  6. I initially had this same thought. But if you were truly venting the server exhaust out of the building, you'd have to draw in outdoor air to replace it. This would be cooler than the "hot" air coming out of the servers, but could also be damp, dusty, etc. So presumably cooling the server exhaust is cheaper than conditioning the outdoor air.

  7. Shane

    @Bruce, by "enclosure" they are referring to the racks themselves. They take the hot air, cool it, then exhaust it out of the hot aisle and back into the room.

  8. evanh

    Enclosure, meaning the hot aisle and server racks. The cooled air is exhausted into the room (cold aisle).

  9. Google and the other big players need to design servers that are more efficient, because there is now more data and there are more users. If they design them well, they might reduce operational and maintenance costs.

  10. Leslie Satenstein

    Regarding the closed loop and the questions from Bruce. I reread the article, and it appears that the cooling is done for each rack, with coils at the top of the rack. As the air moves towards the ceiling and exits the coils, it is supposedly at 80 degrees, and becomes part of the room's air supply. There is no exhaust to outside the area, and no external fresh-air intake. Pumping air out and pulling in air to the datacentre, as I understood the article, would add a lot of cost. Perhaps not if the outside air was cool enough to make the heat exchange cost effective. I may be wrong, but the same in-room air has already been filtered to be dust free. Another savings comes from not having to pass air through filters unnecessarily.

  11. Bruce, I think what may be happening here, is that the air is recirculated back into the room (In Google’s data centers, the entire room serves as the cold aisle.) If they were to draw in fresh air they would more than likely have to deal with humidity concerns and end up having to use some sort of air conditioning system to pull moisture out of the air. As it appears now, they probably have minimal moisture/humidity concerns. Not that it was mentioned in the article, but it is probably a safe assumption they have taken that into account with this design.

  12. Interesting cooling design. I am in constant awe of Google's computing power when thinking about the complexity of building and maintaining an index of trillions of entries, searching the millions, and in some cases billions, of entries that contain those keywords, and being able to return the projected top 10 (or in some cases 7) most relevant results in about half a second. How Google integrates all this computing power to support this kind of performance is fascinating. Can you say inspiration for IBM's Watson in Final Jeopardy? :-) Bare bones is good given Google's massive scale (and the slip in profit margins reported last week means every penny counts, now more than ever). Function over form rules the day in this kind of computational space. We are all used to Google and what it does for hundreds of millions of people each day, but sometimes it's nice to look at the "man" behind the curtain and understand at least a small piece of the magic. I am looking forward to the follow-up posts from Google. Thanks for sharing.

  13. ChrisB

    Google often displays "old" designs in these reveals. Water heat exchangers are also necessary at converted sites; one of them came with a bonus lake and piping, which made it very cheap. My guess is that some sites use free cooling; Google engineers have wryly hinted as much.

  14. sixty

    Using outside air seems like a great idea until you have to deal with the dust, debris, animals (in some circumstances), and moisture that it brings with it. Closed systems really are best.

  15. I heard that some Google data centers sit underneath the sea and the whole package seems to be cooled by the sheer power of the sea water. It's so interesting to see what an empire is behind the Google datacenters.

  16. Almeyer

    I would be interested to know if they still use conventional fans to move the air. Technically the server case fans could move the air, but I don't think they would be able to handle the static pressure of pushing the air through the cooling coils. Does this mean they still have some sort of air-handling fan bank to recirculate the air? I would have to believe they do. Also, is the water cooled by conventional chillers, or is it free-cooled directly through a cooling tower? I live by the Oklahoma site and have driven past it many times. Too bad they don't offer tours...

  17. I wonder how they address side-venting equipment.

  18. fadly

    I can just say "wow".