An Open19 brick server

Data Center World: Why Open19 Designs Matter for Edge Computing

LinkedIn's Yuval Bachar, one of the primary driving forces behind Open19, explains the effort at Data Center World.

On the opening day of this year's Data Center World in Phoenix, Yuval Bachar, LinkedIn's principal engineer of data center architecture, was on hand to explain why the social network's Open19 Project will be an important part of data centers' move to the edge.

He was speaking just a few days after Zaid Ali Kahn, LinkedIn's head of infrastructure engineering, announced at the OCP Summit in San Jose that the company was contributing the Open19 specifications to the Open Compute Project.

The Open19 specifications center around standardized equipment form factors, "cages," connectors, cables, power shelves, and network switches that LinkedIn has developed for use in its data centers. The specs are designed for the 19-inch cabinets that are already standard data center fare, which is how Open19 got its name.

Bachar said the design will be especially important at edge locations: places away from large centralized data centers where servers sit in close proximity to connected devices in order to reduce latency. That can mean anything from server closets in branch offices to computers controlling robotics on factory floors to the equipment telecoms place at cell towers to handle the data generated by mobile traffic.

Yuval Bachar, principal engineer, data center architecture, LinkedIn, and president and chairman of the Open19 Foundation, speaking at the foundation's 2018 summit.

The rush to the edge is only getting started, Bachar noted, as the need to reduce latency will only grow with emerging technologies such as 5G-enabled self-driving cars. 5G promises streaming rates measured in gigabits per second for every connected device and will force edge data centers to be built in locations that are far from ideal.

"It can be at the base of the cell tower in a container, it can be in a room under the cell tower, or it can be a mile away from the cell tower in some kind of a local office," he said. "At some cell towers you're going to get very rich and stable power. In some environments you're going to get unstable power, not really what you need. The rack system is completely out of control, so you have no idea what you'll find when you get to a location."

Bachar added that in the US there are currently something like 50 to 100 different kinds of cell towers spread across more than 200,000 locations. Although some of these locations will have IT staff nearby to handle technical issues as they arise, hardly any will have staff on-premises, and in many cases technicians might be a day or more away.

"If you build a solution that is not generic, you will not be able to address all those needs," he said.

The Open19 specifications are designed not only to ease deployment and maintenance but also to ensure a stable, hands-off system. Servers or other equipment, here called "bricks," are mounted in cages installed in 19-inch racks. In addition, each cabinet contains power and network switching units, each shared by the servers in two cages.

While an Open19 cabinet looks pretty much like any run-of-the-mill cabinet from any colocation facility, the differences become apparent when looking at the cabinet from the rear.

"We're creating a virtual chassis in the back of those cages, and we connected them with special cables, which actually creates a chassis-like environment," he said. "This cable system is actually giving you the opportunity to eliminate all the cables in the system and eliminate all the power distribution system."

With no power supplies on the servers, there's no need for a power distribution system in the rack. The power shelf distributes 400 watts to each server, with each feed isolated from the others. All cabling, both power and networking, runs through a rigid cabling system designed to be somewhat plug-and-play.
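To make that shelf-fed arrangement concrete, here is a minimal back-of-the-envelope sketch of a cabinet-level power budget. It uses the 400-watt-per-server figure cited above; the brick count and shelf capacity are illustrative assumptions, not numbers from the Open19 spec.

```python
# Rough per-cabinet power-budget sketch for a shelf-fed, Open19-style layout.
# The 400 W per-server feed comes from the article; the brick count and
# shelf capacity below are illustrative assumptions, not spec values.

WATTS_PER_BRICK = 400        # per-server feed cited in the article
BRICKS_PER_CABINET = 32      # assumed: e.g., two cages of 16 half-width bricks
SHELF_CAPACITY_W = 15_000    # assumed total power-shelf output for the cabinet

it_load_w = WATTS_PER_BRICK * BRICKS_PER_CABINET
headroom_w = SHELF_CAPACITY_W - it_load_w

print(f"IT load: {it_load_w / 1000:.1f} kW")          # 12.8 kW
print(f"Shelf headroom: {headroom_w / 1000:.1f} kW")  # 2.2 kW
```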

Because of the limited space at most edge locations, high compute density is also usually necessary. For this reason, the Open19 specification will soon incorporate direct-to-chip embedded liquid cooling, using 3M's Novec 7000 cooling fluid, with the cooling for each cabinet being self-contained. This system, Bachar said, can "very easily" cool 450-watt CPUs.

"We don't know very well how to do this with air, to be honest," he said. "You see these monstrous data centers designed with heat sinks that are the size of the whole server just to be able to run air on a 200 watt CPU. With liquid cooling you can actually cool whatever you want, without heating up the environment."

According to Bachar, cutting the power needed for cooling isn't the only green aspect of Open19. As an example, he said the power system, which removes a stage of power conversion, is over 96 percent efficient.

"If you look at typical data centers which have two stages of power conversion, you get something in the range of 85 to 87 percent efficiency," he said. "This is a 10 percent increase off the bat, and you didn't do anything. Just because you skip the power supply in the server, you improve your data conversion efficiency dramatically."

Like everything else in tech, Open19 remains a work in progress. Bachar said one of the things he sees for the future is top-of-rack robotics that will be able to switch out servers after a failure.

Correction: March 25, 2019
Article originally stated that LinkedIn was actively considering adding robotics to Open19, which was in error.