LinkedIn’s Data Center Standard Aims to Do What OCP Hasn’t
Open19 server chassis (Photo: Open19)

Open19 aims to make new breed of data center hardware easy to source for smaller IT shops

Facebook’s Open Compute Project fomented a full-blown revolt against the once-outsize influence the largest American hardware vendors held over the hyper-scale data center market. But by many accounts, it has yet to make a meaningful impact in the smaller facilities that house the majority of the world’s IT infrastructure.

OCP hardware has been difficult to source for companies that buy in much smaller volumes than its two biggest users, Facebook and Microsoft. And for operators unwilling to redesign their data centers to support the standard OCP requirements, the already slim choice of vendors selling OCP gear that fits into standard 19-inch data center racks narrows further.

That’s the problem Open19, a new data center standard developed by LinkedIn, aims to solve. It promises a way to build out data centers that’s both compatible with traditional data center infrastructure and simple and quick enough to meet the servers-by-the-ton pace of hyper-scale data center operators.

It will be a lot easier for companies to adopt Open19 “because they don’t need to change the basic infrastructure,” Yuval Bachar, LinkedIn’s principal engineer for global infrastructure architecture and strategy, said in an interview with Data Center Knowledge.

Today, LinkedIn is launching a non-profit foundation in an effort to grow an ecosystem around its data center standard. And it’s recruited some heavyweight founding members – GE Digital, Hewlett Packard Enterprise, and the multinational electronics manufacturing giant Flex (formerly Flextronics) – in addition to the data center infrastructure startup Vapor IO.

The Open19 Foundation’s charter is to “create project-based open hardware and software solutions for the data center industry.” Similar to the way the Open Compute Foundation (which oversees OCP) works, Open19 will accept intellectual property contributions from members, LinkedIn’s hardware spec being the first one.

See also: Why OCP Servers are Hard to Get for Enterprise IT Shops

Microsoft Keeps an Open Mind

It’s unclear how complete Open19 is at the moment, or to what extent hardware built to the standard has been deployed at LinkedIn data centers. Bachar said the hardware has not yet reached production level.

The cloud data center hardware team at Microsoft, which acquired LinkedIn last year, started standardizing on OCP across its entire global footprint in 2014, when the company joined the project. Its latest-generation cloud server design, still in the works, makes adjustments to ensure easier installation in colocation data centers around the world, including a 19-inch rack and a universal power distribution unit that supports multiple international power specs.

Read more: Meet Microsoft, the New Face of Open Source Data Center Hardware

Whether Microsoft will eventually integrate LinkedIn’s data center infrastructure with its own, and whether it will decide that it would be advantageous to run the social network on the same type of hardware that runs the rest of its services is unknown at this point.

Kushagra Vaid, who oversees cloud hardware infrastructure at Microsoft, told us in March that the company was far from making a decision about LinkedIn’s data centers. “We haven’t really started talking about it,” he said. “We’re going on two clouds for now.”

He added that there were elements of LinkedIn’s standard that he liked: “There are some good things in Open19.”

Bachar said he could not comment on Microsoft’s plans, saying his team remained focused on building an infrastructure that would improve performance for LinkedIn members. “For LinkedIn, this is the future of how we build our … data centers.”

Bricks and Cages

There are other key differences between OCP and Open19, beyond the form factor. Unlike OCP, LinkedIn’s standard doesn’t specify motherboard design, types of processors, network cards, and so on. It also doesn’t require that suppliers that want to sell Open19 gear open source their intellectual property.

“When we built OCP, we built it as a community-led standards organization, where companies and individuals could donate intellectual property and have that intellectual property be innovated against,” Cole Crawford, Vapor IO founder and CEO and former executive director of the Open Compute Foundation, said in an interview with Data Center Knowledge.

“Open19 is a standard in and of itself,” specifying a common chassis and network backplane but not the electronics inside, he went on. “Whatever exists inside of that chassis … that can be differentiated by OEM (Original Equipment Manufacturer), by an ODM (Original Design Manufacturer), with no [IP contribution] requirements at all.”

Open19 describes a cage that can be installed in a standard rack and filled with standard “brick” servers of various width and height (half-width, full-width, single-rack unit height, double height). It also includes two power shelf options, and a single network switch for every two cages.

A data center technician can quickly screw the cage into a rack and slide brick servers in, without the need to connect power and network cables for every node.
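The brick sizing described above lends itself to a back-of-the-envelope capacity calculation. The sketch below models the four brick form factors the article mentions; the 8-rack-unit cage height and the two-bricks-per-row layout are illustrative assumptions, not figures from the Open19 spec.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Brick:
    """One Open19-style brick server form factor."""
    name: str
    width: float   # fraction of cage width: 0.5 = half-width, 1.0 = full-width
    height: int    # height in rack units: 1 = single-RU, 2 = double height


BRICKS = [
    Brick("half-width 1U", 0.5, 1),
    Brick("full-width 1U", 1.0, 1),
    Brick("half-width 2U", 0.5, 2),
    Brick("full-width 2U", 1.0, 2),
]


def bricks_per_cage(brick: Brick, cage_ru: int = 8) -> int:
    """Count identical bricks fitting one cage (cage height is an assumption)."""
    rows = cage_ru // brick.height       # how many rows of this height fit
    per_row = int(1 / brick.width)       # two half-width bricks share a row
    return rows * per_row


for b in BRICKS:
    print(f"{b.name}: {bricks_per_cage(b)} per cage")
```

Under these assumptions, a cage mixing brick sizes trades density for per-node capacity: sixteen half-width 1U bricks versus four full-width double-height ones.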

Standardizing All the Way to the Edge

Another way Open19 stands out is by standardizing both core data centers and edge deployments, an increasingly important part of the market. As digital services have to process more and more data to return near-real-time results, companies are putting computing infrastructure closer to where the data gets generated or where the end users are: factory floors, distribution warehouses, retail stores, wireless towers, and telco central offices.

Edge is a key play for Vapor IO, whose Vapor Chamber and remote data center management software are designed for such deployments.

Edge data centers are also key to GE Digital’s major play, its industrial internet platform Predix, which collects sensor data from things like jet engines or locomotives and analyzes it to, for example, predict failures. It is a cloud platform for developers building industrial internet applications, and as such requires a highly distributed, global infrastructure. Differing data center standards across suppliers and geographies have made building this platform difficult, Darren Haas, VP of cloud engineering at GE Digital, said in a statement.

“Predix extends our capabilities across all form factors — from the edge all the way through to the cloud,” he said. “We built Predix so developers can create software that moves between the various form factors, environments and regions, but we still wrestle with different standards and systems by node, region and vendor.”
