The building blocks for Microsoft’s data center of the future can be assembled in four days, by one person. The two proof-of-concept data center containers, known as IT PACs (short for IT pre-assembled components), are built entirely from aluminum. The first two proof-of-concept units use residential garden hoses for their water hookups.
“Challenge everything you know about a traditional data center,” said Kevin Timmons, who heads Microsoft’s Global Foundation Services, in describing the company’s approach to building new data centers. “From the walls to the roof to where it needs to be built, challenge everything.”
The Just-in-Time Data Center
“View your data centers as a traditional manufacturing supply chain,” said Timmons. “We’ve got IT PACs coming in from Singapore, others from Italy and others from the United States.” Those building blocks – which will include containers for electrical and mechanical support equipment as well as servers and storage – allow data centers to be assembled on a just-in-time basis. Once a site is selected and a steel frame deployed, the modular approach allows data center capacity to be deployed quickly, in cost-effective increments.
Microsoft plans to assemble its IT PACs in huge facilities built around a central power spine, with container shelters on either side. Diagrams from Timmons’ presentation depicted facilities with no side walls and a pointed roof with vents at the top, a design that appears similar to the “computing coops” being built by Yahoo at the company’s new data center in Lockport, New York.
The future Microsoft data centers will be fully air-cooled, with no mechanical cooling, Timmons said. The key to that approach is Microsoft’s updated container/IT PAC design, which functions as a huge air handler with racks of servers inside. The units are technically classified as air handlers instead of structures, a designation which may prove helpful in deploying capacity quickly.
PUE of 1.06 in Testing
Timmons said the latest container design is proving to be extraordinarily efficient, operating with a Power Usage Effectiveness (PUE) of 1.06 in testing. That would rank among the lowest scores reported, below even Google’s published PUEs, which average between 1.1 and 1.2 for most of its facilities.
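For readers unfamiliar with the metric, PUE is simply total facility power divided by the power delivered to the IT equipment, so a PUE of 1.0 would mean zero overhead for cooling, power distribution, and lighting. A minimal sketch of the calculation (the kilowatt figures below are illustrative, not Microsoft’s actual measurements):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A value of 1.0 means every watt drawn by the facility reaches the
    IT equipment; anything above 1.0 is overhead (cooling, distribution
    losses, lighting, etc.).
    """
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical numbers: a 1.06 PUE implies roughly 60 kW of overhead
# for every 1,000 kW of IT load.
print(round(pue(1060.0, 1000.0), 2))  # 1.06
```

Put another way, at PUE 1.06 only about 6% of the facility’s power is overhead, versus roughly 10–20% at the Google averages cited above.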
Running servers at a higher temperature is a contributor to that efficiency. “We’ve started to push our inlet temperatures up to 90 to 95 degrees,” said Timmons. “At that point, it’s really about humidity more than temperature.”
“Free cooling” using fresh air instead of chillers can save enormous amounts of energy, but also usually places limits on site location for data centers. Timmons said climate is a crucial factor in Microsoft’s site location decisions, but indicated that the new container designs may broaden its options.
“I haven’t yet found a place in the world where they won’t work,” he said. “We’re currently running a trial in Southeast Asia in a high-temperature, high-humidity environment, and I’m looking forward to the results.”
Timmons, the keynote speaker at today’s DataCenterDynamics New York conference at the New York Hilton, discussed Microsoft’s design innovations for its next-generation data center infrastructure, saying the industry is at an “inflection point.”
Article printed from Data Center Knowledge: http://www.datacenterknowledge.com
URL to article: http://www.datacenterknowledge.com/archives/2010/03/03/microsofts-timmons-challenge-everything/
URLs in this post:
 “computing coops”: http://www.datacenterknowledge.com/archives/2009/06/30/yahoos-fresh-air-computing-coop/
 Rich Miller: http://www.datacenterknowledge.com/archives/author/richm/