
Will Open Compute Alter the Data Center Market?


Construction is underway on the V2 data center on the Santa Clara, Calif. campus of Vantage Data Centers. The V2 design will feature some elements common to the Open Compute initiative unveiled last week.

Are the designs advanced by the Open Compute Project only useful for building huge single-tenant data centers like Facebook’s new Oregon facility? Or will these uber-efficient designs be available at your local colocation center?

We posed that question to some of the leading data center builders who lease space to enterprise customers in multi-tenant facilities. Their verdict: Open Compute designs are difficult to implement in colocation centers but may have a greater impact in the wholesale data center space.

The Open Compute Project was launched last week to publish data center designs developed by Facebook for its Prineville, Oregon data center, as well as the company’s custom designs for servers, power supplies and UPS units. Facebook’s decision to open source its designs prompted expectations that the move could democratize data center infrastructure, making cutting-edge designs available to companies that can’t afford their own design team.

While it might be straightforward for companies to integrate the Open Compute designs when building their own data centers, Facebook’s customizations present challenges in multi-tenant facilities. These include servers that operate at 277 volts of AC power instead of the traditional 208 volts, a cooling design optimized for warmer data center temperatures, and the use of fresh air cooling instead of chillers – a strategy that works primarily in cool climates.
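A rough sketch of why the higher distribution voltage matters (the per-server wattage below is a hypothetical figure, not from the article): for a fixed power draw, current falls as voltage rises, and resistive losses in wiring scale with the square of the current.

```python
# Illustrative arithmetic only. The 450 W per-server figure is an
# assumption for the example; the 208 V / 277 V values are from the article.
server_power_w = 450.0

for volts in (208.0, 277.0):
    amps = server_power_w / volts  # I = P / V
    print(f"{volts:.0f} V -> {amps:.2f} A per server")

# Resistive (I^2 * R) wiring loss over the same conductors scales with
# the square of the current, so moving from 208 V to 277 V cuts it:
loss_ratio = (208.0 / 277.0) ** 2
print(f"Wiring loss at 277 V is ~{loss_ratio:.0%} of the 208 V loss")
```

The efficiency gain compounds across thousands of servers, which is part of why a single-tenant operator like Facebook can standardize on 277 V while a colocation provider, serving mixed hardware, generally cannot.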

‘A Lot of Moving Parts’
“There are a lot of moving parts to consider,” said Ben Stewart, Senior Vice President for Facilities Engineering at Terremark. “We operate colocation centers. We build in the flexibility to accommodate all possible customers. If we have a customer that wishes to deploy 277VAC servers, then we would have to design that capability into our portfolio of power offerings.

“However, if we were to design a colocation center that only offered 277VAC power distribution, that would severely limit the market we could serve,” he added. “Multi-tenant data centers are built to accommodate varying customer requirements and therefore cannot lock into any one design feature.”

The wholesale data center market, in which customers lease dedicated data center “pods,” offers greater possibilities than colocation, where tenants share more components of the power and cooling infrastructure. Three wholesale providers – data center REITs Digital Realty Trust, DuPont Fabros Technology and newcomer Vantage Data Centers – say they expect to see customers requesting some elements of Open Compute designs, but differ on how quickly that demand will materialize.

Some Concepts Being Implemented
Vantage Data Centers says it is already implementing some of the Open Compute design concepts, many of which are not unique to Facebook. “We are big believers in the design they’ve got,” says Jim Trout, President and CEO of Vantage. “We are actively in the process of creating something very similar. Our design is a little different, as we’re delivering a different low voltage to our customers. But we have a design that does not have PDUs (power distribution units).”

Vantage says its V2 data center at its Santa Clara, Calif. campus, which will come online later in 2011, will utilize a 400/230V distribution to the rack with no PDUs. It also will feature a “penthouse” cooling tier that drops cool air into the server area. Unlike the Facebook Oregon facility, Vantage will be equipped with chillers, as the Silicon Valley climate supports fresh air cooling for only 65 to 80 percent of the year. Vantage is also using “zig-zag” transformer technology to further improve efficiency compared to conventional single-phase distribution systems at 240V or 277V.
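To put the free-cooling figure in perspective, a back-of-the-envelope calculation (the hours-per-year arithmetic is mine; the 65–80 percent range is from the article) shows how much of the year the chillers would actually have to run:

```python
# Rough arithmetic based on the article's 65-80% free-cooling estimate.
hours_per_year = 8760

for fraction in (0.65, 0.80):
    free_hours = fraction * hours_per_year
    chiller_hours = hours_per_year - free_hours
    print(f"{fraction:.0%} free cooling -> "
          f"{free_hours:.0f} h free, {chiller_hours:.0f} h on chillers")
```

Even at the low end of the range, chillers would be idle for roughly two-thirds of the year, which explains why a hybrid design (chillers plus fresh air) can approach the efficiency of Facebook's chiller-less Oregon facility in a milder climate.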

Trout said custom designs have cost implications in construction.

“If you have a small building, this is tough to do,” said Trout. “We did it with a 9 megawatt facility. It would be very difficult to deploy this at a 1 megawatt pod level. It would be more expensive for providers to take a pod-by-pod approach.”

Digital Realty: We May Use Open Designs
Digital Realty Trust, the world’s largest operator of data centers, believes it can be done.

“I think this could be feasible in some multi-tenant environments,” said Jim Smith, Chief Technology Officer at Digital Realty Trust. “Overall, I view this approach as a great development and a great piece of design engineering. I expect that at Digital Realty Trust we will be taking advantage of the open source designs for some customers in the future.

“I can easily envision a scenario where even our standard 2N design could be retrofitted, or in fact designed for, this variant on power distribution (and) server layouts,” said Smith. “One of the key advantages of the Facebook team driving this as open source is the ability for small and medium-sized customers to adopt this architecture. Imagine you are a large financial services firm looking to virtualize desktops, or a growing web-based service provider who wants to adopt this server architecture.

“You can now engage someone like Digital Realty Trust to build and operate an identical architecture at 1 megawatt scale,” Smith continued. “We can take advantage of the design time, cost benefits and technology innovation that Facebook pioneered, without the development overhead and related market risk.”

While the Open Compute Project is relevant to the multi-tenant market, it’s not likely to be the world-changer many pundits have envisioned.

“Will it be disruptive? I don’t think so in the short-term,” Smith said. “This is still a very conservative industry – especially for enterprise users. But more customers are building IT infrastructure that is like Facebook’s – small variations of hardware deployed at larger scale. This model still fits well in a traditional data center at modest scale.”

DuPont Fabros Sees Open Design as ‘Next Chapter’
“We deem it more like the next chapter in the efficiency of a data center versus a disruptive force,” said Scott Davis, Executive Vice President of Data Center Operations at DuPont Fabros Technology, which operates large wholesale data centers in Virginia, Chicago and New Jersey and will soon expand into Santa Clara. “Companies like Facebook, Google, Apple and the other large Internet players will always drive the industry in new directions due to the sheer volume of servers and critical load they deploy, and the speed at which they adopt and refresh their server technology.

“The success of a multi-tenant data center is its ability to appeal to a broad spectrum of clientele,” Davis said. “It is our experience that with the exception of the large Internet players, the vast majority of wholesale data center consumers continue to seek more tightly controlled environmental operating conditions and a very high degree of availability from the critical infrastructure. These customers do not scale nor refresh their technology at the same rate as a Facebook.”

The Role of the Network
Davis noted that some of Facebook’s reliability features are made easier by the fact that it can easily shift workloads across a large network of data centers.

“Not all data center users will be able to operate in that fashion,” he said. “Most companies do not have multiple data centers, and can’t afford to have a data center out of service or unavailable as it is mission critical to their business.  These companies will continue to require multiple levels of redundancy in their data center.”

It’s not that Open Compute Project designs won’t impact multi-tenant facilities, he added. It’s that enterprise customers are primarily focused on uptime requirements and service level agreements (SLAs), and are open to change once new technologies have proven their reliability.

“This is not to say that this type of innovation will not drive changes in the industry and broaden people’s perspectives,” Davis said. “It most certainly will. However, the evolution and adaptation will be gradual and less disruptive.”

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.


4 Comments

  1. Justin

    uhh... the PSUs for Open Compute are 277V as has been stated... To suggest that this somehow undermines the entire project in co-location is total BS. YOU SIMPLY SWAP PSUs FOR 208V UNITS DUH!!! Also your chillers etc have proven to not be necessary when used with container datacenters. The days of 58 year old baby boomers with their stupid chillers and stupid 1U closed chassis and 1970s thinking are finally over... Thank god for the environment’s sake!!!