Can Open Hardware Transform the Data Center?

Facebook's Frank Frankovsky announces the formation of a non-profit foundation to oversee the Open Compute Project, which focuses on developing open source hardware designs. Photo by Colleen Miller.

Is the data center industry on the verge of a revolution in which open source hardware designs transform the process of designing and building data centers? The Open Compute Project, an initiative begun in April by Facebook, is gaining partners, momentum and structure. Yesterday it unveiled a new foundation and board to shepherd the burgeoning movement.

While the Open Compute initiative is focused on the needs of Internet companies with huge “scale out” infrastructure, the list of marquee names at yesterday’s summit hinted at a future in which the benefits of open source hardware could expand to the enterprise market.

“What began a few short months ago as an audacious idea — what if hardware were open? — is now a fully formed industry initiative, with a clear vision, a strong base to build from and significant momentum,” said Frank Frankovsky, Director of Hardware Design and Supply Chain at Facebook. “We are officially on our way.”

“This is a momentous time in our history,” said Andy Bechtolsheim, a board member of the new Open Compute Foundation. “This is the future of efficiency and large-scale design in the data center.”

The Open Compute Project was launched in April to publish data center designs developed by Facebook for its Prineville, Oregon data center, as well as the company’s custom designs for servers, power supplies and UPS units. Facebook’s decision to open source its designs prompted expectations that the move could democratize data center infrastructure, making cutting-edge designs available to companies that can’t afford their own design team.

If the project doesn’t succeed, it won’t be for lack of support. Yesterday’s second Open Compute Summit in New York featured appearances from executives for some of the sector’s leading names – Intel, Dell, Amazon, Facebook, Red Hat and Goldman Sachs. The audience was filled with data center thought leaders from Google, Microsoft, Rackspace and many other companies with large data center operations.

That turnout is not an isolated event, but reflects a growing focus on collaborative projects to reduce cost, timelines and inefficiency in data center construction and operation. The Open Compute Project is just one of a handful of initiatives to bring standards and repeatable designs to IT infrastructure. These include the Open Data Center Alliance, the Open Networking Foundation, the Open Source Routing Forum and the OpenStack project, which is developing an open source cloud computing platform. What’s driving all this openness?

“Some of the ‘rules’ that drive our industry are wrong, and sharing data will help change that,” said James Hamilton, a Distinguished Engineer at Amazon Web Services, who noted shifts in industry practice on data center temperature and humidity.

“Progress happens when people get frustrated with something,” said Bechtolsheim, founder of Sun Microsystems and now Arista Networks, a fast-growing player in the networking industry. “This is the first time we have a true standard where companies don’t have to reinvent (their data center technology). This principle could be expanded. In this new world, we believe the effect will be very similar to the impact of open source software.”

One of the critiques of the Open Compute designs is that they are optimized for companies running huge, homogeneous Internet infrastructures and are not appropriate for many enterprise data centers. Frankovsky says this focus is deliberate.

“Scale computing has specific needs,” he said. “Focusing on this space and its efficiency is one of our key points. By binding together as a community, our voice will be better heard on scale computing.”

There are signs that the Open Compute designs could become more practical for a broader array of data center customers in the future. One of the new participants in the project is Digital Realty Trust, the world’s largest operator of third-party data center space. Frankovsky said Digital Realty is interested in adapting some of its build-to-suit designs for companies adopting Open Compute designs.

Missing from the dais were companies specializing in power, cooling and mechanical design – areas where Open Compute designs are being shared. “There is absolutely a role for the power and cooling vendors,” said Frankovsky. “I think that would probably be the next wave of contributions you would see.”

Will open hardware change the way data centers are designed and built? “We’re at a crossroads,” said Jimmy Pike, Chief Architect at Dell Data Center Solutions. “We’re at a time when we can work together and share knowledge to help things happen quickly.”


About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

Add Your Comments



  1. mifd118

    open source hardware? great but dont let apple hear about it. they dont like the word open & will go out of their way to try to crush it.

  2. Hi Rich - I work for an architecture firm here in Houston that specializes in data center design. The layout and structure of a data center facility is so important. I think the Open Compute project is great - collaboration and innovation makes for better ideas and practices. It'll be interesting to see how data center design will be affected from an architectural standpoint. Anyways, thanks for sharing! - Aly

  3. kcj

    What the crap mifd118? Go google "CUPS" and "Darwin OS".

  4. Ric

    Can't crush a wave. Simple. I'm doing the same with the hardware I'm building. It is our goal that anyone should be able to produce a compatible product that can plug into the frames. Many manufacturers that can supply the consumer with the appropriate parts is a good thing. Same thing for software. Once the main release is out the door with the complete hardware, the software is opened and stays that way. Open open open. If you let the people play, they will have fun. Close the soft and hardware, and you're just another box maker. Ric

  5. How about someone beef up a Raspberry Pi?

  6. Jonas

    kcj: Kind of ironic that CUPS wouldn't be allowed to be distributed through the App Store, isn't it? (Apple strictly forbids free software in the copyleft sense.)

  7. The bigger picture, something I've aspired to for over twenty years, and it is great to see it finally happening in bigger and bigger ways: "The OSCOMAK project will foster a community in which many interested individuals will contribute to the creation of a distributed global repository of manufacturing knowledge about past, present and future processes, materials, and products."

  8. Peter Grafix

    KCJ, Apple killed the Darwin OS after people tried using it to put MacOS on non-Apple computers. Apple purchased the company that produces CUPS because they use it themselves in MacOS, and had their own preferences regarding what Open Source license the project would use. There's no guarantee that CUPS will remain open at all. Apple continues to threaten to kill Open Source video codec projects with their patents, and indeed has organized a patent coalition with other companies to do so.

  9. Wesley Parish

    Well, it's kind of late, but better late than never. If, like me, you've ever had to work on, let alone dismantle, IBM PCjr and early PS/2 boxes, you'll find yourself wanting to dismantle the engineers who designed the ghastly things. Partly in response to such experiences, and partly in response to my own questions about the efficiency of a system where a significant part of the computer's energy budget went on running cooling fans, I had worked out by 2003 that the most efficient means of cooling the system down was to design the computer box and mobo as a unified whole - to cut a long story short, as a static ramjet where the excess heat drives the hot air out by the simple operations of expansion and increasing speed of movement. (It's a static ramjet, but all it needs to move is the air, not the box - if it did move the box, you'd justifiably panic! :) I wasn't able to get any help in further research, let alone development (New Zealand is known for its support of research and development in the breach, not in the observance: my grand-uncle Sir E Bruce Levy was one of the only researchers of his era, when Karl Popper roasted NZ academics for their lackluster performance in research - but then my grand-uncle was an agronomist), so I dropped it off in 2003 or 2004 at OpenCores or OpenHardware, I forget just which, and got at least two comments. It would delight me immensely to have the opportunity to research it further and develop it to the degree that a simple supercomputer on board the space station would be able to provide air conditioning as well as information processing ...