A collage of profile pictures makes up a wall in the break room at Facebook’s data center in Forest City, North Carolina. (Photo by Rainier Ehrhardt/Getty Images)

Guide to Facebook’s Open Source Data Center Hardware

Mark Zuckerberg’s social networking giant is the world’s biggest open source hardware design factory

When Facebook rolled out the Open Compute Project in 2011, it started something of a revolution in the data center market. In a way, that revolution was already underway: Google had figured out it was better off designing its own hardware than buying off-the-shelf products from the top vendors, and Facebook had reached the same conclusion some time earlier.

But OCP, now a non-profit organization that aggregates open source hardware and data center designs and promotes applying the open source software ethos to hardware, has become a hub of sorts. There, vendors and operators of some of the world's largest data centers come together to build the next wave of internet infrastructure, driven by the operators' actual requirements rather than by vendors' own ideas about what the market needs.

Both Microsoft and Apple have joined OCP, and Microsoft has already contributed multiple cloud server designs to the open repository. Google joined this year, announcing it would contribute a data center rack and power distribution design it has been using in its own facilities. A host of telcos are involved as they transform their infrastructure to support Software Defined Networking and Network Function Virtualization, as are some of the biggest financial services firms, which need more computing capacity than ever before and are looking for the most cost-effective ways to build out that infrastructure.


Facebook, of course, was the first contributor of intellectual property to the open source project, and it has contributed more designs of servers, electrical infrastructure components, network hardware, and software than any other company.

Here’s a guide to all the Facebook data center hardware contributed so far:

Data center, open sourced in 2011: One of the first things Facebook open sourced was the design spec for its data center in Prineville, Oregon, the first facility the company designed and built for itself to replace leased facilities. The document describes mechanical and electrical specifications created to maximize efficiency of its web-scale infrastructure.

Triplet rack, open sourced in 2011: Facebook deployed servers in Prineville in what it calls “triplet racks.” Each group of three 42U racks holds a total of 90 servers and has two top-of-rack switches.

Battery cabinet, open sourced in 2011: Facebook has a dedicated backup battery cabinet for each pair of triplet racks, ready to supply DC power to six racks in case the main AC power supply is interrupted. The cabinets replace traditional UPS systems used in data centers.
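
Taken together, the triplet rack and battery cabinet specs define a simple deployment unit: two triplets, or six racks, backed by one battery cabinet. Below is a minimal Python sketch (not Facebook's actual tooling; the function name is hypothetical) that tallies capacity per unit using the figures above.

```python
# Capacity math for the deployment unit described above: two triplet
# racks (six 42U racks) sharing one backup battery cabinet. The figures
# come from the article; the function is a hypothetical illustration.

SERVERS_PER_TRIPLET = 90          # three 42U racks per triplet
TOR_SWITCHES_PER_TRIPLET = 2
TRIPLETS_PER_BATTERY_CABINET = 2  # one cabinet backs six racks

def deployment_unit_totals(cabinet_groups: int) -> dict:
    """Totals across `cabinet_groups` battery-cabinet groups."""
    triplets = cabinet_groups * TRIPLETS_PER_BATTERY_CABINET
    return {
        "racks": triplets * 3,
        "servers": triplets * SERVERS_PER_TRIPLET,
        "tor_switches": triplets * TOR_SWITCHES_PER_TRIPLET,
        "battery_cabinets": cabinet_groups,
    }

print(deployment_unit_totals(10))
# {'racks': 60, 'servers': 1800, 'tor_switches': 40, 'battery_cabinets': 10}
```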

Freedom server, open sourced in 2011: The Server V1 design, also known as “Freedom,” features a custom chassis that goes into the triplet racks and enables installation of components without any tools.

Spitfire server (AMD), open sourced in 2011: This is a variant of an OCP motherboard for AMD chips.

Power supply, open sourced in 2011: This power supply is what enables OCP servers to run on AC power but take DC power from the battery cabinets when main power gets interrupted. The self-cooled power supply features a converter with independent AC and DC output connectors and a DC input connector for backup voltage. The design’s main focus is high energy efficiency.
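
One way to picture that behavior is as a two-input selector that prefers AC mains and falls back to the battery cabinet's DC feed. The Python sketch below is an illustrative model of the failover logic only, not firmware from the OCP spec.

```python
# Illustrative model of the power supply's input selection: run on AC
# mains when it is present; draw on the battery cabinet's DC backup
# input only when mains is interrupted.

from enum import Enum

class Source(Enum):
    AC_MAINS = "ac_mains"
    DC_BATTERY = "dc_battery"

def select_source(ac_present: bool, dc_present: bool) -> Source:
    """Prefer AC mains; use the DC backup input only when AC is lost."""
    if ac_present:
        return Source.AC_MAINS
    if dc_present:
        return Source.DC_BATTERY
    raise RuntimeError("no input power available")

assert select_source(ac_present=True, dc_present=True) is Source.AC_MAINS
assert select_source(ac_present=False, dc_present=True) is Source.DC_BATTERY
```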

Windmill, open sourced in 2012: The Server V2 design, also known as “Windmill,” was a power-optimized, bare-bones motherboard for Intel Xeon processors, designed to provide the lowest possible capital and operating costs. It did away with many features that vendors usually include in servers but that aren’t necessary for Facebook’s needs.

Watermark server (AMD), open sourced in 2012: Facebook also contributed a V2 server design for AMD Opteron processors, built around the same power- and cost-saving principles as Windmill.

Mezzanine card V1, open sourced in 2012: Facebook’s first mezzanine card for Intel V2 motherboards offered extended functionality, such as support for 10GbE PCI-E devices.

Open Rack V1, open sourced in 2013: Facebook’s first rack design was created to maximize operational efficiency. It required things like tool-less routine service procedures, no vanity features, direct integration with air containment solutions, the ability to do installation and operations work in the cold aisle, and data cables in front.

Winterfell, open sourced in 2013: Winterfell was a web server with three x86 server nodes in an OCP chassis.

Knox, or Open Vault, open sourced in 2013: The Open Vault was a storage solution for the Open Rack with modular I/O topology. It was optimized for high disk density, holding 30 drives in a 2U chassis, and could work with almost any host server.
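
That density figure is easy to extrapolate from. Here is a back-of-the-envelope Python sketch using the 30-drives-per-2U number above; the usable rack height and per-drive capacity are illustrative assumptions, not part of the spec.

```python
# Back-of-the-envelope Open Vault density math: 30 drives per 2U
# chassis (from the spec). Rack height and drive size are assumptions.

DRIVES_PER_CHASSIS = 30
CHASSIS_HEIGHT_U = 2

def drives_per_rack(usable_u: int) -> int:
    """Drives in a rack with `usable_u` rack units of Open Vault chassis."""
    return (usable_u // CHASSIS_HEIGHT_U) * DRIVES_PER_CHASSIS

# Assuming 40U of chassis space and hypothetical 4 TB drives:
print(drives_per_rack(40))            # 600 drives
print(drives_per_rack(40) * 4, "TB")  # 2400 TB raw
```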

Mezzanine card V2, open sourced in 2014: Based on the original OCP mezzanine card, this card’s mechanical and electrical interface was extended to accommodate new use cases.

Cold Storage, open sourced in 2014: This is a storage server designed for data that’s accessed less frequently, such as old Facebook photos. It is optimized for low hardware cost, high capacity and storage density, and low power consumption. Facebook built separate, simplified data centers just to house these cold storage servers.

Panther Micro Server, open sourced in 2014: The microserver is a PCI-E-like card with an SoC (System-on-Chip), memory, and storage for the SoC. It can be plugged into baseboard slots used for power distribution and control, BMC management, and network distribution. It can be applied to servers, storage, or networking devices.

Open Rack V2, open sourced in 2014: The second-generation rack increased the maximum weight of IT gear that can be installed in the rack from 950 kg to 1,400 kg and increased height from 2,100 mm to 2,210 mm.
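
Those numbers work out to a roughly 47 percent gain in load capacity for only about a 5 percent gain in height, as the quick calculation below shows (the figures come from the spec; the percentage math is just illustration).

```python
# Open Rack V1 -> V2 deltas, using the figures quoted above.

v1 = {"max_load_kg": 950, "height_mm": 2100}
v2 = {"max_load_kg": 1400, "height_mm": 2210}

for key in v1:
    delta = v2[key] - v1[key]
    pct = 100 * delta / v1[key]
    print(f"{key}: +{delta} ({pct:.1f}% increase)")
# max_load_kg: +450 (47.4% increase)
# height_mm: +110 (5.2% increase)
```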

Honey Badger, open sourced in 2014: The lightweight Honey Badger compute module turns an Open Vault from a JBOD (Just a Bunch of Disks), which needs to be controlled by a host server, into a full-fledged storage server in its own right.

Wedge, open sourced in 2015: The Wedge switch was Facebook’s first foray into designing networking hardware. The design team gave this top-of-rack switch the same power and flexibility as a server, with flexible hardware configuration, including the ability to use Intel, AMD, or ARM processors, thanks to the use of the Group Hug architecture.

6-Pack, open sourced in 2015: The 6-Pack is a core switch that followed the Wedge top-of-rack box. It sits at the core of Facebook’s data center network fabric and includes six Wedge switches as its basic building blocks, hence the name.
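
To make the building-block idea concrete, here is a toy Python model of a modular switch assembled from Wedge elements. The class layout and per-element port count are hypothetical illustrations, not Facebook's FBOSS code.

```python
# Toy model of the 6-Pack's structure: six Wedge switches acting as the
# building blocks of one modular core switch. Port count is illustrative.

from dataclasses import dataclass, field

@dataclass
class Wedge:
    ports: int = 16  # assumed port count per element, for illustration

@dataclass
class SixPack:
    elements: list = field(default_factory=lambda: [Wedge() for _ in range(6)])

    @property
    def total_ports(self) -> int:
        return sum(w.ports for w in self.elements)

core = SixPack()
print(len(core.elements), "Wedge elements,", core.total_ports, "ports")
# 6 Wedge elements, 96 ports
```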

Yosemite, open sourced in 2015: Yosemite is Facebook’s multi-node server platform. It hosts four OCP-compliant one-socket server cards in a sled that can be plugged directly into the Open Rack.
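
Yosemite's capacity scales linearly with sled count, at four one-socket servers per sled. Here is a minimal sketch of that packing math, assuming a hypothetical number of sleds per rack, since the article doesn't give one.

```python
# Yosemite packing math: four one-socket server cards per sled (from the
# description above). Sleds-per-rack is an assumption for illustration.

CARDS_PER_SLED = 4

def one_socket_servers(sleds: int) -> int:
    return sleds * CARDS_PER_SLED

print(one_socket_servers(48))  # 192 servers, assuming 48 sleds per rack
```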

Wedge 100, accepted as an official OCP spec in October 2016: Facebook’s second-generation top-of-rack data center switch supports 100 Gigabit Ethernet.

Backpack, submitted to OCP in November 2016: Facebook’s second-generation modular switch features a fully disaggregated architecture that uses simple building blocks called switch elements, with clear separation of the data, control, and management planes.
