When it comes to data center efficiency, Yahoo has maintained a lower profile than rivals Google and Microsoft. But the Yahoo team is building a compelling data center story of its own, with innovations in cooling design and energy efficiency ratings approaching the best that Google has achieved.
Yahoo’s Adam Bechtel began telling the story yesterday at the O’Reilly Velocity 2009 conference in San Jose, Calif. Bechtel, the chief architect of Yahoo’s data center operations, shared details of a patented cold-aisle containment system that integrates an overhead cooling module, building the air conditioning units into the top of a “podule” of cabinets packed with servers.
That design has helped Yahoo lower its Power Usage Effectiveness (PUE) to 1.21, according to Bechtel, just a hair shy of the best numbers disclosed by Google and slightly better than the lowest PUE reported by Microsoft. The PUE metric (PDF) compares a facility’s total power usage to the amount of power used by the IT equipment, revealing how much is lost in distribution and conversion.
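The PUE ratio described above is simple enough to sketch in a few lines. The kilowatt figures below are illustrative, not Yahoo’s actual meter readings:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power (an ideal facility scores 1.0)."""
    return total_facility_kw / it_equipment_kw

# Example: a facility drawing 1,210 kW overall to deliver 1,000 kW to servers
print(round(pue(1210.0, 1000.0), 2))  # 1.21 -- every watt of IT load costs 1.21 watts at the meter
```

A PUE of 1.21 means only about 17 percent of incoming power is consumed by cooling, power distribution and conversion losses rather than the IT equipment itself.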
Costs Driving Innovation
Bechtel notes that although PUE is a useful benchmark, Yahoo’s focus on efficiency was driven by the bottom line. “We were spending obscene amounts on infrastructure,” Bechtel recalled. “Our power consumption was doubling every 10 months, and that was just a shocker. We started to look at energy consumption in a very different way.”
Those cost issues convinced Yahoo to begin building its own data centers. Bechtel said Yahoo followed the lead of aluminum smelting plants and sought out hydroelectric power from dams on the Columbia River. Yahoo’s plans to build a data center near Quincy, Washington first surfaced in November 2005. While Microsoft has touted its first-mover status in building a major data center in central Washington, it wasn’t until January 2006 – two months after Yahoo’s plans became public – that Microsoft bought land in Quincy.
“We started doing greenfield data center builds – literally,” said Bechtel. “We bought alfalfa fields and built data centers from the ground up.”
Pod + Cooling Modules
That ground-up design included a cooling system featuring a cooling module mounted on top of a row of cabinets enclosed in a cold-aisle containment system. This podule system – a combination of pods and cooling modules – was patented last year. The design uses water-based cooling coils inside the modules to cool warm air from the data center. The cold air descends into the enclosed cold aisle and is drawn through the equipment by server fans.
This approach allows Yahoo to either use recirculated air or mix in fresh air from outside the data center (air-side economization). The combination of free cooling and the isolation of cold and warm air creates an efficient system that eliminates the need for a raised floor and perimeter computer room air conditioning units (CRACs). Bechtel said the design allowed Yahoo to lower its PUE to 1.21.
Yahoo’s Data Center Container
Bechtel also discussed Yahoo’s use of a data center container to support the M45 Supercomputing Project with Carnegie Mellon University (which DCK reported back in April 2008). Bechtel said the 4,000-processor Hadoop cluster in the Rackable container was a 0.1 megawatt system, which has since been dwarfed by Yahoo Hadoop clusters now using 2 megawatts of power in a podule installation in the data center. Bechtel said Yahoo is now assembling 4 megawatt Hadoop clusters, which could incorporate as many as 20,000 servers.
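A quick back-of-the-envelope check on those cluster figures – using only the numbers Bechtel cited, and assuming the full power budget is spread evenly across the servers – suggests a plausible per-server draw for commodity hardware of that era:

```python
# Illustrative arithmetic only, based on the figures quoted above.
cluster_watts = 4_000_000   # the 4 megawatt Hadoop clusters Bechtel described
server_count = 20_000       # the stated upper bound on servers per cluster

watts_per_server = cluster_watts / server_count
print(watts_per_server)     # 200.0 watts per server, averaged across the cluster
```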
“We’re working on some stuff that’s a lot larger than this,” said Bechtel, adding that Yahoo’s upcoming innovations will lower its PUE even further.