1U Open19 server 'brick' (Image: Open19)

Open19 V2 Keeps Pace With Data Center Evolution

The second iteration of the Open19 specification helps prep data centers for much higher rack power densities.

The Open19 Project has come a long way since it was first announced by Yuval Bachar at the 2016 DatacenterDynamics Webscale conference.

At the time, LinkedIn (then Bachar's employer) was the sole participant, but within about 10 months the Open19 Foundation was created to run the project, with Flex, GE Digital, Hewlett Packard Enterprise, and the edge data center startup Vapor IO joining as founding members.

Open19 basics

The initial goal was to build on the work LinkedIn had done customizing the 19-inch racks in its data centers, turning it into a specification that standardized racks for easier deployment while accommodating the physical equipment standards already in place.

"We knew that we wanted to stay within the form factor of the supply chain from a motherboard perspective, because changing the motherboard size only really works if you can make enough volume to justify your own computers," Zachary Smith, president of the Open19 Foundation's board of directors and head of edge infrastructure at Equinix, recently told Data Center Knowledge. "If you can't, you need to slipstream into where the rest of the industry goes, which is around half-width or full-width motherboard design.

"Open19 is fully compliant with 19-inch and standard half-width and full-width motherboards, so its very easy for manufacturers to retrofit their existing investments to work within a standardized form factor," he added.

Smith likens this to the work started by Intel in the 1990s to standardize personal computers.

"The analogy I like to use for Open19 is that it's the ATX case of the data center rack," he said, and pointed to the era when manufacturers were offering computers in nonstandard cases housing motherboards that could only fit that particular enclosure. "Then the ATX case came out, where the white box industry decided that if we could have a motherboard that would always fit into a case, and the case could have the right place to put the power supply and whatnot, we could all do a lot better.

"We never really built that for the rack. Every manufacturer puts their power supply in a different area and everybody has cables coming in and out in a different place, which makes our job in the data center world complex."

The goal of Open19 goes a little deeper than merely a standardized rack where everything is in its place, however. The aim is to make an installation as easy as sliding equipment into an opening in the front of a rack, tightening a few screws, and walking away with a fully wired, functioning server.

Move to the Linux Foundation

Last year the Open19 Foundation became part of the Linux Foundation, a move that Smith said has made it easy for the thousands of organizations associated with the Linux Foundation to join and participate in the project. As examples, he pointed to Linux Foundation projects like RISC-V, open networking projects, and the Cloud Native Computing Foundation.

"There was no data center hardware project that tied all this stuff together, so it's really exciting to be working with the different projects at Linux Foundation," he said. "We're relatively niche, there's only so many people in the world who care about innovating on rack level infrastructure for enterprise and edge data centers. I would say there's dozens or hundreds of people who care deeply about this from a supply chain perspective."

In addition to putting Open19 in close proximity to organizations that can benefit from it, Linux Foundation status has opened the door for the project to make itself better known through avenues such as the foundation's numerous conferences.

Open19 v2

Almost immediately after coming under the Linux Foundation's umbrella, Open19 went to work on a new version of its specification, one that addresses many of the changes in the data center space since LinkedIn first announced the project six years ago.

The original specification included design components for three different "brick cage" implementations for housing multiple servers or storage devices in a single rack-mountable unit; a power shelf, which did away with the need for each server or storage device to have its own power supply; and snap-on power and data cables that eliminated the labor-intensive task of managing cables.

It also introduced blind mate connectors for both power and data connections, which meant that after a cage's initial installation, no technician would need to go to the rack's back side but could take care of all operations from the front.

The specification also included the option for running 12 volt DC power into the rack for the relatively few data centers that were then standardizing on DC instead of AC power.

In the years since Open19's original specification was developed, the use of DC power has become more important. Data center operators are increasingly turning to it because it's more efficient than AC, which must be converted to DC inside the rack to run equipment, a conversion that wastes a considerable amount of power.
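
To make the efficiency argument concrete, here is a back-of-the-envelope sketch in Python. The efficiency figures are illustrative assumptions chosen for this example, not numbers from the Open19 spec:

```python
# Back-of-the-envelope comparison: per-server AC/DC conversion vs. a
# shared rack-level DC power shelf. The efficiency figures below are
# illustrative assumptions, not values from the Open19 spec.

RACK_INPUT_W = 10_000        # power drawn at the rack, in watts

PER_SERVER_PSU_EFF = 0.90    # assumed efficiency of a typical per-server PSU
SHARED_SHELF_EFF = 0.96      # assumed efficiency of a centralized power shelf

per_server = RACK_INPUT_W * PER_SERVER_PSU_EFF
shared = RACK_INPUT_W * SHARED_SHELF_EFF

print(f"Per-server PSUs deliver: {per_server:,.0f} W")
print(f"Shared power shelf:      {shared:,.0f} W")
print(f"Saved per rack:          {shared - per_server:,.0f} W")
```

Even a few points of conversion efficiency, multiplied across every rack in a facility, add up quickly.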

"In Open19 v1, LinkedIn did a great job for the time when they started the spec, but what they didn't see at the time was industry adoption of 48 volt power, and we do acknowledge now in the v2 spec that 48 volt power has become prevalent," My Truong, Open19's chief architect and field CTO at Equinix, told DCK.

That prevalence was brought about by the Open Compute Project, which specifies open standards for the entire data center infrastructure stack and included 48 volt DC in v3 of its rack specification. With Open19 and OCP now agreeing on a 48 volt standard, server manufacturers will be able to adopt the voltage as a single standard for DC-powered servers instead of having to offer servers for multiple voltages.

One advantage of the new DC specification is that it will increase the wattage available to servers, whose power requirements keep climbing.

Truong said that while Open19's original specification could deliver 400 watts per brick, the voltage increase will allow delivery of 3.5 kW per brick. In addition, the new spec continues to allow operators to bring 380 volt DC into the power shelf, which will, among other things, let data centers feed power from hydrogen fuel cells directly to their racks.
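
The arithmetic behind that jump follows from P = V × I. A quick sketch, assuming the v1 bricks were fed at the 12 volts the original spec provided for; the current figures are derived here from the stated wattages, not taken from either spec:

```python
# Rough check of the per-brick power figures quoted above, using P = V * I.
# Assumes v1 bricks ran at 12 V; the implied currents are derived from the
# stated wattages, not taken from either spec.

v1_volts, v1_watts = 12, 400      # Open19 v1: 400 W per brick
v2_volts, v2_watts = 48, 3_500    # Open19 v2: 3.5 kW per brick

v1_amps = v1_watts / v1_volts     # ~33 A implied through the v1 connector
v2_amps = v2_watts / v2_volts     # ~73 A implied through the v2 connector

print(f"v1: {v1_watts} W at {v1_volts} V -> {v1_amps:.0f} A")
print(f"v2: {v2_watts} W at {v2_volts} V -> {v2_amps:.0f} A")

# The 8.75x power increase is 4x the voltage combined with roughly
# 2.2x the current.
print(f"Power ratio: {v2_watts / v1_watts:.2f}x "
      f"(voltage {v2_volts / v1_volts:.0f}x, current {v2_amps / v1_amps:.2f}x)")
```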

Another big change in the new specification is that it brings liquid cooling into play. Racks designed for liquid cooling will connect the plumbing that carries coolant into and out of a server through blind mate connectors, making the installation of a liquid-cooled server as easy as installing traditional air-cooled equipment.

"Once it's set up in the rear of the rack, the only way that you interact with that cooling mechanism is up front, because we're using blind mate connectors across the board," Truong said. "It's really meant to be a very simple operation."

He told us that although Open19 v2 is still in a draft state, "the major decisions have already been made."

"Having said that, there’s still opportunity for new members to provide feedback on the draft," he added. "We’re hoping to have it out by end of Q3 and the spec field test by the end of the year."
