Can Open Hardware Transform the Data Center?

Is the data center industry on the verge of a revolution in which open source hardware designs transform the process of designing and building data centers? The Open Compute Project is gaining partners, momentum and structure. Yesterday it unveiled a new foundation and board to shepherd the burgeoning movement.

Rich Miller

October 28, 2011



Facebook's Frank Frankovsky announces the formation of a non-profit foundation to oversee the Open Compute Project, which focuses on developing open source hardware designs. Photo by Colleen Miller.

Is the data center industry on the verge of a revolution in which open source hardware designs transform the process of designing and building data centers? The Open Compute Project, an initiative begun in April by Facebook, is gaining partners, momentum and structure. Yesterday it unveiled a new foundation and board to shepherd the burgeoning movement.

While the Open Compute initiative is focused on the needs of Internet companies with huge "scale out" infrastructure, the list of marquee names at yesterday's summit hinted at a future in which the benefits of open source hardware could expand to the enterprise market.

"What began a few short months ago as an audacious idea — what if hardware were open? — is now a fully formed industry initiative, with a clear vision, a strong base to build from and significant momentum," said Frank Frankovsky, Director of Hardware Design and Supply Chain at Facebook. "We are officially on our way."

"This is a momentous time in our history," said Andy Bechtolsheim, a board member of the new Open Compute Foundation. "This is the future of efficiency and large-scale design in the data center."

The Open Compute Project was launched in April to publish data center designs developed by Facebook for its Prineville, Oregon data center, as well as the company's custom designs for servers, power supplies and UPS units. Facebook's decision to open source its designs prompted expectations that the move could democratize data center infrastructure, making cutting-edge designs available to companies that can't afford their own design teams.

If the project doesn't succeed, it won't be for lack of support. Yesterday's second Open Compute Summit in New York featured appearances from executives at some of the sector's leading names, including Intel, Dell, Amazon, Facebook, Red Hat and Goldman Sachs. The audience was filled with data center thought leaders from Google, Microsoft, Rackspace and many other companies with large data center operations.

That turnout is not an isolated event, but reflects a growing focus on collaborative projects to reduce cost, timelines and inefficiency in data center construction and operation. The Open Compute Project is just one of a handful of initiatives seeking to bring standards and repeatable designs to IT infrastructure, alongside the Open Data Center Alliance, the Open Networking Foundation, the Open Source Routing Forum and the OpenStack cloud computing platform. What's driving all this openness?

"Some of the 'rules' that drive our industry are wrong, and sharing data will help change that," said James Hamilton, a Distinguished Engineer at Amazon Web Services, who noted shifts in industry practice on data center temperature and humidity.

"Progress happens when people get frustrated with something," said Bechtolsheim, founder of Sun Microsystems and now Arista Networks, a fast-growing player in the networking industry. "This is the first time we have a true standard where companies don't have to reinvent (their data center technology). This principle could be expanded. In this new world, we believe the effect will be very similar to the impact of open source software."

One of the critiques of the Open Compute designs is that they are optimized for companies running huge, homogeneous Internet infrastructures and are not appropriate for many enterprise data centers. Frankovsky says this focus is intentional.

"Scale computing has specific needs," he said. "Focusing on this space and its efficiency is one of our key points. By binding together as a community, our voice will be better heard on scale computing."

There are signs that the Open Compute designs could become practical for a broader array of data center customers in the future. One of the new participants in the project is Digital Realty Trust, the world's largest operator of third-party data center space. Frankovsky said Digital Realty is interested in adapting some of its build-to-suit designs for companies adopting Open Compute hardware.

Missing from the dais were companies specializing in power, cooling and mechanical design, areas where Open Compute designs are being shared. "There is absolutely a role for the power and cooling vendors," said Frankovsky. "I think that would probably be the next wave of contributions you would see."

Will open hardware change the way data centers are designed and built? "We're at a crossroads," said Jimmy Pike, Chief Architect at Dell Data Center Solutions. "We're at a time when we can work together and share knowledge to help things happen quickly."
