An Open19 brick server

LinkedIn Says Its Open19 Server Design Is Ready for Prime Time

The company says it will open-source the hardware platform, including the network switch, power shelf, and cabling system, in the coming weeks and months.

Hardware designs LinkedIn created to lower costs and speed up its data center deployments are now ready for prime time, the social network said Thursday.

LinkedIn first revealed the initiative, called Open19, more than two years ago, and this July it said it was putting the finishing touches on the first deployment. The deployment of Open19 gear inside the Microsoft-owned company’s data centers is now in full swing, Yuval Bachar, a top LinkedIn data center engineer, wrote in a blog post.

In the coming weeks and months, the company is planning to open-source “every aspect of the Open19 platform – from the mechanical design to the electrical design – to enable anyone to build and create an innovative and competitive ecosystem,” he wrote.

LinkedIn timed the announcement to coincide with the Open19 Summit in San Jose. The event is organized by the Open19 Foundation, which LinkedIn co-launched with Hewlett Packard Enterprise, GE, and electronics manufacturer Flex, among others.

Design details here: LinkedIn's Open19 Data Center Hardware Platform

Today, Open19 defines four standard server form factors (chassis dimensions), two “cages” for those servers to slide into, power and data cables, a power shelf, and a network switch.
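To make those relationships concrete, here is a minimal Python sketch of the standard's building blocks. The brick names, unit heights, and cage sizes below are assumptions drawn from Open19's public materials, not details stated in this article.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class BrickFormFactor:
        """One of the four standard chassis sizes (names/dimensions assumed)."""
        name: str
        height_u: int      # height in rack units
        width_halves: int  # width in half-rack-width slots (1 = half, 2 = full)

    # Assumed taxonomy for the four standard brick sizes.
    FORM_FACTORS = [
        BrickFormFactor("brick", 1, 1),                     # 1U, half width
        BrickFormFactor("double brick", 1, 2),              # 1U, full width
        BrickFormFactor("double-high brick", 2, 1),         # 2U, half width
        BrickFormFactor("double-high double brick", 2, 2),  # 2U, full width
    ]

    # The article says there are two cage sizes; 8U and 12U are an assumption.
    CAGE_SIZES_U = [8, 12]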

The standard does not describe the servers’ electronic components, but it does describe what goes inside the switch, including the processor (an Intel Broadwell-DE CPU) and the operating systems it runs (ICOS and SONiC).

The overall idea behind the design is to minimize the amount of work it takes to deploy servers in a data center. The cages go into standard 19-inch server racks; technicians can slide any of the four standard server “bricks” into the cages and quickly supply them with power and network links, using a single connector per server.
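A rough sketch of that deployment flow, with the slot bookkeeping and connector behavior invented for illustration, might look like this:

    from dataclasses import dataclass, field

    @dataclass
    class Brick:
        name: str
        height_u: int      # height in rack units
        width_halves: int  # 1 = half width, 2 = full width

    @dataclass
    class Cage:
        """A cage in a standard 19-inch rack: a grid of half-width, 1U slots."""
        height_u: int
        occupied: set = field(default_factory=set)  # (row, col) slots in use

        def slide_in(self, brick: Brick, row: int, col: int) -> None:
            """Slide a brick into the cage, claiming the slots it covers."""
            slots = {(row + r, col + c)
                     for r in range(brick.height_u)
                     for c in range(brick.width_halves)}
            if any(r >= self.height_u or c > 1 for r, c in slots):
                raise ValueError("brick does not fit at that position")
            if slots & self.occupied:
                raise ValueError("those slots are already occupied")
            self.occupied |= slots
            # A single blind-mate connector at the rear supplies both power
            # and network, so no per-server cabling is needed (per the article).
            print(f"{brick.name}: powered and networked via one connector")

    cage = Cage(height_u=12)
    cage.slide_in(Brick("brick", 1, 1), row=0, col=0)
    cage.slide_in(Brick("double brick", 1, 2), row=1, col=0)

The point of the single connector is that a technician's job reduces to sliding a brick in; there is no per-server power or network cabling step to get wrong.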

LinkedIn also wanted to standardize hardware deployment across both core and edge data centers. Edge locations, which in LinkedIn’s case are probably in colocation data centers, don’t have LinkedIn technicians onsite. The simple design means the company doesn’t have to hire highly trained engineers every time it has to deploy new servers in a remote location.

This aspect of the design is what attracted some companies specializing in edge data centers to Open19. Two of the foundation’s co-founding companies, Vapor IO and Packet, are building some of the world’s first “extreme-edge” data centers, deploying servers at or near cell-tower sites.

The vision is to eventually have thousands of small edge computing sites dot the map. That vision would be extremely expensive to realize if every server deployment required hiring a highly skilled engineer.

Earlier this year, LinkedIn joined the Open Compute Project, an open-source data center technology ecosystem that covers everything from data center power and cooling systems to networking software. Overlap between Open19 and OCP is minimal, because Open19 doesn’t specify internal server components, such as processors, memory, or motherboards.

LinkedIn said it joined OCP because demand for computing capacity on its platform was growing at a rate that required infrastructure build-out approaches similar to those used by hyperscale platforms like Facebook and Azure. Azure is the cloud platform operated by LinkedIn’s parent company, Microsoft, one of OCP’s biggest backers.
