Scott Noteboom of Yahoo speaks at the 7x24 Exchange conference Wednesday in Boca Raton, Fla.

Yahoo Is Ready for a Data Center Revolution



Scott Noteboom, the head of data center operations at Yahoo, sees 2010 as a moment of historic opportunity for the data center industry. As growing Internet adoption requires infrastructure everywhere, he says data center builders would do well to note the early history of the automobile industry.

“There are times during revolutions when you can do new things,” said Noteboom, whose great-great-grandfather was an auto worker in the industry’s early years. As the auto business became more crowded, competition introduced an unexpected market dynamic.

“The innovators became very conservative in nature,” said Noteboom, the keynote speaker Wednesday at the 7×24 Exchange spring conference in Boca Raton, Fla. “When things get competitive, margins will go down and only the efficient, who have the ability to innovate, will survive. Data centers have been operating for years in this bubble, almost like a castle. The data center can no longer be a conservative castle; it has to be a commodity factory. There are things we need to do to remain competitive.”

Testing Boundaries
At the top of the list: question assumptions and push boundaries in search of improved efficiency and competitiveness. Yahoo’s data center infrastructure has undergone a five-year transformation in which Noteboom and the Yahoo team have driven inefficiencies out of their design.

Yahoo has progressively refined its data center design, setting aside conventional wisdom about humidity and temperature in data centers, and committing to fresh air cooling (air-side economization). That laid the groundwork for a site selection process that could create additional savings by building in locations with aggressive tax incentives, renewable utility power, and a climate that supports year-round free cooling.

The Yahoo Computing Coop
The end result was the new Yahoo data center in Lockport, N.Y., a suburb of Buffalo. Lockport features the first implementation of the Yahoo Computing Coop (YCC), which operates with no chillers, and will require water for only a handful of days each year. The YCC units are prefabricated metal structures measuring about 120 feet long by 60 feet wide. Each of the three coops has louvers built into the side to allow cool air to enter the computing area, allowing the entire building to function as an air handler.

Yahoo projects that the new facility will operate at a Power Usage Effectiveness (PUE) of 1.08, placing it among the most efficient in the industry. Best of all, the computing coop design allows Yahoo to build new data center space at a cost of just $5 million per megawatt, and complete construction in six to eight months.
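PUE is simply the ratio of total facility power to the power consumed by the IT equipment itself. A minimal sketch of the arithmetic behind Yahoo's projected 1.08 figure (the 10 MW IT load below is a hypothetical illustration, not a number from the article):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 would mean every watt entering the building reaches
    the servers; the excess is cooling, power distribution, and lighting.
    """
    return total_facility_kw / it_load_kw


# Hypothetical example: a 10 MW (10,000 kW) IT load at PUE 1.08 implies
# about 10,800 kW of total facility draw -- only ~8% overhead, versus
# the 2.0 or higher common in conventional facilities of that era.
print(round(pue(10_800, 10_000), 2))  # 1.08
```

At a PUE of 2.0, the same 10 MW of IT load would require 20 MW at the meter, which is why shaving the ratio toward 1.0 translates so directly into operating cost.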

Origin Stories Lacking
It’s been a long journey from third-party colocation space to the Yahoo Computing Coop. The journey started by questioning the accepted range of 45 to 55 percent relative humidity for data center space. “Where did this belief come from?” Noteboom wondered.

Noteboom went searching for references, and says he found only a few academic scenarios warning of humidity issues. For practical arguments for tightly controlled humidity for computer rooms, Noteboom said he had to go back to the era of punch-card data entry, when concerns focused on whether excess humidity might wilt the punch cards and make them difficult to process properly.

“In 2005 we built a data center without humidification control,” said Noteboom. “It was filled with 25,000 servers. We were worried about static, and had all these protocols for handling equipment. We don’t have them anymore. We’ve operated through an entire lifecycle in this data center, and we’ve seen no impact.”

Stress Testing for Servers
The YCC design being deployed in Lockport will further test assumptions about temperature and humidity. So Noteboom and the Yahoo team set up a test site in San Jose, Calif., and subjected servers to a wide range of environmental conditions.

“We have tortured the hell out of servers at different temperatures,” said Noteboom. “We tested blasting rain into servers, using a mister on them, and even dry ice for cold conditions.”

The testing has affirmed Noteboom’s belief in the new design for the Lockport facility, as well as the need to continue to push boundaries and test conventional wisdom in pursuit of faster, cheaper, better data centers.

“I think you’re going to find that hardware will become more resilient,” he said. “In coming years, I predict that we’ll be able to build these kinds of data centers everywhere. I believe we’re going to see an increased move to data center environments where things run fine without cooling.”

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.


3 Comments

  1. DC1

    This is an interesting challenge of past practices. The humidity standards for controlled environments were also drawn from semiconductor production facilities, where humidity control was required for the proper function of the photoresist/lithography processes. It also served to mitigate ESD damage problems caused by airflow friction on equipment surfaces and devices in the fabs. By itself, humidity control did not eliminate all severe ESD damage, and ceiling ionizers were also required to control ESD. The equipment and materials are different between silicon chip factories and server farms — no exposed silicon nitride, Teflon or reticles on the farm. However, there is airflow, resistive materials, and the same physics. This position by Yahoo strikes me as a meat packing plant announcing that no one has been sickened by relaxing their sanitary standards.

  2. I want to thank Scott for reinforcing the message! As you all know, this is one of the things I have been pushing in the industry, and I used the tent city work from a couple of years ago to show that servers are much more robust than most people think. As a result, our strategy is also to aggressively use outside air and run servers as high as 95F, and we are not going to stop there. As an ex server designer, I argued in the Intel Great Debates a year or two ago that we should be targeting 50C (122F) servers so we can economize anywhere with no water usage. I just reiterated that fact again in a ComputerWorld article: http://www.computerworld.com/s/article/349433/Data_Center_Density_Hits_the_Wall My kudos go to Scott!