LinkedIn headquarters in Mountain View, California. (Photo by Justin Sullivan/Getty Images)

LinkedIn Adopting the Hyperscale Data Center Way


This month, we focus on the open source data center. From innovation at every physical layer of the data center coming out of Facebook’s Open Compute Project to the revolution in the way developers treat IT infrastructure that’s being driven by application containers, open source is changing the data center throughout the entire stack. This March, we’ll zero in on some of those changes to get a better understanding of the pervasive open source data center.

LinkedIn’s need for scale has never been greater, and the social networking company is adopting many of the same approaches to building hyperscale data center infrastructure that companies like Google, Facebook, and Microsoft have been using.

Those approaches include designing custom hardware, software, and data center infrastructure, and sourcing hardware directly from original design manufacturers, bypassing the major IT vendors, such as HP, Dell, or Cisco.

“We took our data center through a transformation,” Yuval Bachar, LinkedIn’s principal engineer of global infrastructure architecture and strategy, said. “We have been working on this for the last eight months.”

The first place where the company is applying the new infrastructure strategy is its new data center outside of Portland. The facility, which LinkedIn is leasing from Infomart Data Centers, features custom electrical and mechanical design, as well as custom network switches.

It is the first data center designed to enable the company to go from running on tens of thousands of servers to running on hundreds of thousands of servers.

The other LinkedIn data centers, located in California, Texas, Virginia, and Singapore, will transition to the new hyperscale infrastructure gradually, Bachar said.

Infomart’s Portland data center in Hillsboro, Oregon (Photo: Infomart Data Centers)

Homebaked 100G Switches and Fabric

The biggest part of the transformation was rethinking the way the company does networking inside its data centers. It has designed its own 100 Gigabit switches and a scale-out data center network fabric.
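The article doesn’t detail the fabric’s topology, but scale-out data center fabrics are typically built as two-tier leaf-spine (Clos) networks. As a generic illustration of the sizing math behind such fabrics (not LinkedIn’s actual design), here is a minimal sketch, assuming fixed-radix switches with half of each leaf’s ports facing servers:

```python
def max_servers(radix: int) -> int:
    """Servers supported at full bisection by a two-tier leaf-spine fabric.

    Each leaf devotes radix/2 ports to servers and radix/2 to spine uplinks.
    Each of the radix/2 spines connects once to every leaf, so the fabric
    can hold up to `radix` leaves: capacity = radix * (radix / 2) servers.
    """
    return radix * (radix // 2)

print(max_servers(32))  # 512 servers with 32-port switches
print(max_servers(64))  # 2048 servers with 64-port switches
```

Scaling out further (to the hundreds of thousands of servers mentioned below) is typically done by adding a third tier of spines rather than by buying bigger switches.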

The plan is to use the same kind of switch in all LinkedIn data centers. Today, the company runs a mix of whitebox switches designed to its spec and off-the-shelf switches from the big well-known vendors.

LinkedIn went with 100G as the baseline networking technology because it will eventually need that kind of bandwidth (it doesn’t today) and because the technology lets it run 10G, 25G, or 50G switching in the meantime, Bachar explained.

Using the PSM4 optical interface standard, LinkedIn engineers split 100G into two 50G ports. This enabled them to use the latest switching technology at a much lower cost than 40G optical interconnects, according to Bachar.
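The split works because 100G PSM4 carries traffic over four parallel 25G single-mode lanes, so a port can be broken out into lane groups. A back-of-the-envelope sketch of that lane math (an illustration, not LinkedIn’s tooling):

```python
# 100G PSM4 carries four parallel 25G lanes over single-mode fiber.
LANE_GBPS = 25
LANES_PER_PORT = 4

def breakout(lanes_per_group: int) -> tuple[int, int]:
    """Return (logical ports, Gbps per port) for a given lane grouping."""
    assert LANES_PER_PORT % lanes_per_group == 0
    ports = LANES_PER_PORT // lanes_per_group
    return ports, lanes_per_group * LANE_GBPS

print(breakout(4))  # (1, 100): one full 100G port
print(breakout(2))  # (2, 50): the two 50G ports described above
print(breakout(1))  # (4, 25): four 25G ports
```

The same four-lane structure is what makes the 10G/25G/50G fallback modes mentioned above possible on 100G-capable hardware.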

“It’s the most cost-effective solution today to connect with such high bandwidth,” he said.

You can read more on LinkedIn’s network fabric in a blog post by Bachar.

Mega Scale at High Power Density

At this point, LinkedIn has not started designing its own servers the way other hyperscale data center operators do. It does, however, buy servers from the same original design manufacturers, picking what they have on the menu, with some configuration modifications.

For the next generation, LinkedIn is “definitely considering” having servers designed to its own specs for better cost efficiency, Bachar said.

The new fabric enables the company to switch to a high-density data center design, which is radically different from the low-density, highly distributed model Facebook and Microsoft use.

The data center in Oregon will have 96 servers per cabinet. Power density is slightly below 18kW per cabinet today, he said, but the cooling design allows densities of up to 32kW per rack. For comparison, the average power density in Facebook’s data centers is about 5.5kW, according to Jason Taylor, VP of infrastructure at Facebook.
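For a rough sense of what those figures imply per machine (a back-of-the-envelope calculation from the numbers above, not figures provided by LinkedIn):

```python
SERVERS_PER_CABINET = 96

def watts_per_server(cabinet_kw: float) -> float:
    """Average power budget per server for a given cabinet draw."""
    return cabinet_kw * 1000 / SERVERS_PER_CABINET

print(watts_per_server(18))  # 187.5 W per server at today's ~18kW draw
print(watts_per_server(32))  # ~333 W per server at the 32kW cooling ceiling
```

At the 5.5kW Facebook average, the same 96-server cabinet would leave under 60W per server, which is why the low-density model spreads servers across far more racks.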

One other internet giant that has gone the high-density route is eBay.

To cool this kind of density, LinkedIn is using heat-conducting doors on every cabinet, and every cabinet is its own contained ecosystem. There are no hot and cold aisles like you would find in a typical data center.

“Everything is cold aisle,” Bachar said. “The hot aisle is contained within the rack itself.”

The decision to use a high-density design was made after a detailed analysis of server, power, and space costs. It turned out high density was the optimal route for LinkedIn, he said.

The main reason is that the company uses leased data center space, so it has space and power restrictions that the likes of Facebook or Google, who design and build their own data centers, don’t have, Bachar explained.

On Board with Open Innovation

The leased-space constraint is also the reason LinkedIn decided against using Open Compute Project hardware, which is not designed for standard data centers and data center racks.

Bachar said LinkedIn didn’t have any immediate plans to join OCP, the Facebook-led open source hardware and data center design effort, which lists Apple, Microsoft, and now also Google as members. But the company does share the ideals of openness that underpin OCP, he said.

LinkedIn will make some of the infrastructure innovation it’s done internally publicly available, be it through OCP or another avenue. “We will share our hardware development and some of our software development,” Bachar said.


About the Author

San Francisco-based business and technology journalist. Editor in chief at Data Center Knowledge, covering the global data center industry.
