Video: Facebook’s Next-Generation Servers
December 3rd, 2012 By: Rich Miller
FOREST CITY, N.C. – Delivering status updates and photos to 1 billion users around the globe requires a powerful Internet infrastructure. Facebook is constantly updating its technology to ensure that its servers and storage are as fast and efficient as possible.
That means that the company’s data centers are evolving as well. That trend is evident at Facebook’s campus in Rutherford County, North Carolina, where the company has built two massive server farms. From the outside, the two 300,000 square foot buildings look almost identical. But even though the data centers were built just a year apart, there are key changes on the inside, including technology updates for Facebook’s servers, storage, racks and cooling system.
The pace of change in Facebook’s infrastructure is driven by two engines – ongoing research by the company’s design team, and innovations developed by the larger community of engineers in the Open Compute Project, the non-profit established to publish the hardware designs that Facebook developed for its first company-built data center in Prineville, Oregon.
The initial server design for Prineville featured one server per sled, with each sled 1.5 rack units high. The first building in North Carolina, known as FRC1, features a custom server design known as Windmill, which houses two servers per sled, doubling the density in the same rack space.
The evolution continues at the newest Facebook data center, FRC2, where the company is test-driving its new Open Rack enclosure design and next-generation Open Compute server, which retools the entire design to house three 2U servers per sled, and also overhauls the power supplies. And then there’s “Knox” – the Open Compute storage design, which shares the enclosures with the new servers.
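The density gains across these three generations can be illustrated with some simple arithmetic. This is a sketch only: the usable rack height below is a hypothetical figure chosen for demonstration, not a published Facebook specification.

```python
# Illustrative arithmetic only: the rack size is an assumption for
# demonstration, not a published Facebook specification.

RACK_UNITS = 30  # hypothetical usable rack units per rack

def servers_per_rack(sled_height_u, servers_per_sled, rack_units=RACK_UNITS):
    """Servers that fit in one rack for a given sled design."""
    sleds = rack_units // sled_height_u  # whole sleds that fit in the rack
    return sleds * servers_per_sled

gen1 = servers_per_rack(1.5, 1)  # Prineville: one server per 1.5U sled
gen2 = servers_per_rack(1.5, 2)  # FRC1 "Windmill": two servers per sled
gen3 = servers_per_rack(2, 3)    # FRC2 Open Rack: three servers per 2U sled
```

Whatever the actual rack height, packing a second server into the same sled doubles the count, which is the change the article describes between Prineville and FRC1.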
Last week Data Center Knowledge got a tour of Facebook’s North Carolina campus. In this video, Keven McCammon from the Facebook data center operations team takes us into the data halls of both buildings in Rutherford County to show us how the designs have evolved. This video runs about 9 minutes.
For a deeper dive into the details of these newest designs, check out DCK videos from earlier this year, when Facebook hardware designer Matt Corddry provided a detailed look at prototypes:
Facebook’s Open Rack Revealed: A new single-width rack replaces the three-wide “triplet” racks used in Facebook’s first data center in Prineville, Oregon. The racks have widened the equipment trays to 21 inches (from the traditional 19 inches) while retaining the standard 24-inch footprint. Facebook has also reworked the power system. Power supplies are now separate from the server motherboards and reside in a “power shelf” at the base of the rack, where they tie into the busbar at the rear of the unit. The 12V power is then distributed across three busbars that connect to the servers, a design that Facebook says improves the efficiency of its power distribution.
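The efficiency argument behind the shared power shelf can be sketched with a toy calculation. The efficiency percentages below are hypothetical figures for illustration only; the article does not publish numbers. The general idea is that a few larger, shared supplies can run closer to their efficiency sweet spot than many small, lightly loaded per-server PSUs.

```python
# Hypothetical efficiency figures for illustration only; none of these
# numbers come from Facebook or the Open Compute Project.

def rack_conversion_efficiency(psu_efficiency, distribution_efficiency=1.0):
    """Overall AC-to-server efficiency: PSU conversion times busbar losses."""
    return psu_efficiency * distribution_efficiency

# One small PSU inside each server, lightly loaded (assume 85% efficient):
per_server_psu = rack_conversion_efficiency(0.85)

# Shared power shelf feeding 12V busbars (assume 94% PSU efficiency and
# ~1% loss on the busbar run to the servers):
power_shelf = rack_conversion_efficiency(0.94, 0.99)

assert power_shelf > per_server_psu
```

Even after accounting for busbar losses, the centralized design comes out ahead under these assumed figures, which is consistent with the efficiency claim Facebook makes for the power shelf.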
Facebook’s Windmill Server Design provides a first look at the new Open Compute server design. Each module is about seven inches wide, allowing three to fit in each tray. The taller design allows Facebook to improve the airflow through the server by using 80 millimeter fans, a change from the 60 millimeter fans in the first generation of servers.
Facebook’s Storage Design Prototype offers a video overview of the “Knox” storage sled, which houses up to 15 disks in trays that slide in and out of the rack for easy maintenance. Knox features a hinge that allows each tray of disks, once slid out of the rack, to hang vertically at an angle, making it easier for staff to replace disks in the upper area of the storage rack.
Coming Tomorrow: A Video Overview of the Latest Innovations in Facebook’s Cooling Design