The interior of the new Facebook data center in Prineville, Oregon.

Facebook Unveils Custom Servers, Facility Design


A look at the blue-lit servers inside the cold aisle of the new Facebook data center in Prineville, Oregon.

Facebook today unveiled details of its new technology infrastructure, which features custom-built servers, racks and UPS units that will fill its new data center in Prineville, Oregon. The project is Facebook’s first company-built facility, and is optimized from the two-story structure right down to the servers to reflect the company’s vision for energy efficient data center operations.

“Being able to design more efficient servers, both in terms of cost and power usage, is a big part of enabling us to build the features we add,” said Mark Zuckerberg, the CEO of Facebook, in a briefing in Palo Alto, California.

Facebook’s servers are powered by chips from both Intel and AMD, with custom-designed motherboards and chassis built by Quanta Computer of Taiwan. The servers use a 1.5U form factor, allowing the use of larger heat sinks and fans to improve cooling efficiency.

Facebook also said it is releasing its server and data center designs and mechanical drawings as part of the new Open Compute Project, which will make cutting-edge data center technology available under the Open Web Foundation license. The initiative holds the promise of creating momentum for open standards in data center design, an area that has often been cloaked in secrecy.

Facebook VP of Technical Operations Jonathan Heiliger said the Prineville facility is operating at a Power Usage Effectiveness (PUE) of 1.07, placing it among the industry’s most efficient facilities. That efficiency will dramatically reduce the amount of power Facebook requires to run its data center and servers, said Heiliger, who noted that the company’s approach emphasizes the “negawatt.”

“That’s the watt you never see and never use,” said Heiliger. “We think that’s the most effective way for operators of large data centers to conserve energy.”
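PUE is simply the ratio of total facility power to the power consumed by IT equipment, so a quick sketch shows what a 1.07 figure means in practice. The 10 MW load and the comparison PUE of 2.0 below are illustrative assumptions, not Facebook's actual numbers:

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# All load figures here are illustrative, not Facebook's measured values.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Ratio of all power entering the facility to power used by IT gear."""
    return total_facility_kw / it_equipment_kw

def overhead_kw(it_equipment_kw: float, pue_value: float) -> float:
    """Power spent on cooling, distribution losses, lighting, etc."""
    return it_equipment_kw * (pue_value - 1.0)

it_load = 10_000.0  # hypothetical 10 MW IT load

# At Prineville's reported PUE of 1.07, overhead is only about 7% of IT load...
print(overhead_kw(it_load, 1.07))  # roughly 700 kW

# ...versus a conventional facility at a PUE of 2.0, where overhead
# equals the entire IT load:
print(overhead_kw(it_load, 2.0))   # 10,000 kW
```

At the scale of a large facility, that gap is megawatts of power that never has to be purchased or dissipated.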

Here’s an overview of how Facebook is achieving those efficiencies:

Cooling Design
Facebook adopted the two-tier structure seen in several recent designs, which separates the cooling infrastructure from the data hall and allows maximum use of floor space for servers. Facebook opted to use the top half of the facility to manage the cooling supply, so that cool air enters the server room from overhead, taking advantage of the natural tendency of cold air to fall and hot air to rise. This eliminates the need to use air pressure to force cool air up through a raised floor.

Oregon’s cool, dry climate was a key factor in Facebook’s decision to locate its facility in Prineville. “It’s an ideal location for evaporative cooling,” said Jay Park, Facebook’s Director of Datacenter Engineering. The temperature in Prineville has not exceeded 105 degrees in the last 50 years, he noted.

The air enters the facility through an air grille in the second-floor “penthouse,” with louvers regulating the volume of air. The air passes through a mixing room, where cold winter air can be mixed with server exhaust heat to regulate the temperature. The cool air then passes through a series of air filters and a misting chamber where a fine spray is applied to further control the temperature and humidity. The air continues through another filter to absorb the mist, and then through a fan wall that pushes the air through openings in the floor that serve as an air shaft leading into the server area.

“The beauty of this system is that we don’t have any ductwork,” said Park. “The air goes straight down to the data hall and pressurizes the entire data center.”
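The mixing-room step described above can be sketched as a simple energy balance: blend cold outside air with warm server exhaust until the supply air reaches a target temperature. The function and temperatures below are my own illustrative model, not Facebook's actual control logic:

```python
def exhaust_fraction(t_outside_c: float, t_exhaust_c: float,
                     t_supply_c: float) -> float:
    """Fraction of recirculated server exhaust needed so the blended
    airstream reaches the target supply temperature.

    Simple linear mixing model: assumes both streams have the same
    density and specific heat, a reasonable first approximation for air.
    """
    if not t_outside_c < t_supply_c < t_exhaust_c:
        raise ValueError("supply target must lie between the two streams")
    return (t_supply_c - t_outside_c) / (t_exhaust_c - t_outside_c)

# Hypothetical winter day: 0 °C outside air, 40 °C server exhaust,
# and a 20 °C supply target -> blend in 50% exhaust air.
print(exhaust_fraction(0.0, 40.0, 20.0))  # 0.5
```

On a hot day the fraction drops toward zero and the misting chamber does the remaining work through evaporative cooling.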

Racks and Servers

The cool air then enters the custom racks, which are enclosed in a hot-aisle containment system. The racks are “triplet” enclosures with three columns, each housing 30 of the 1.5U Facebook servers. Each enclosure is also equipped with two rack-top switches to support high network port density.
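The triplet figures above imply roughly 90 servers sharing two rack-top switches per enclosure. A quick back-of-envelope check (the even port split across the two switches is my assumption, not a detail from the design):

```python
# Figures from the article: three columns of 30 servers, two switches.
columns_per_triplet = 3
servers_per_column = 30
switches_per_triplet = 2

servers = columns_per_triplet * servers_per_column
print(servers)  # 90 servers per triplet enclosure

# If each server has one network port and ports are split evenly,
# each rack-top switch serves at least this many server-facing ports:
ports_per_switch = servers // switches_per_triplet
print(ports_per_switch)  # 45
```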

The servers are customized to eliminate waste. The 1.5U chassis (2.625 inches) reflects this bare-bones approach. “We removed anything that didn’t have a function,” said Amir Michael, a hardware engineer at Facebook. “No bezels or paint. The slightly taller chassis allowed us to use taller heat sinks. We’re also able to use larger 60 millimeter fans rather than 40 millimeter fans. The 60 millimeter fans are more efficient.”

The cabling and power supplies are located on the front of the servers, so Facebook staff can work on the equipment from the cold aisle rather than the enclosed, 100-degree-plus hot aisle. That’s a result of a collaborative approach in which hardware engineers worked closely with data center tech staff, Michael said.

“We had a server integration party with beer, chicken wings and servers,” said Michael. Staff from both departments took turns seeing how fast they could break down and rebuild the servers. “We got a lot of feedback and we ended up with a server that can be assembled very quickly.”

UPS and Power Distribution

One of the areas Facebook targeted for special attention was power distribution, where traditional data center designs with a centralized UPS (uninterruptible power supply) lose power to multiple AC-to-DC and DC-to-AC conversions. “We paid a lot of attention to the efficiency of this power design,” said Michael.

Facebook’s servers include custom power supplies that allow servers to use 277-volt AC power instead of the traditional 208 volts. This allows power to enter the building at 400/277 volts and go directly to the server, bypassing the step-downs seen in most data centers as power passes through UPS systems and power distribution units (PDUs). The custom power supplies were designed by Facebook and built by Delta Electronics of Taiwan and California-based Power-One.
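Series conversion stages multiply their losses, which is why skipping step-downs matters. A sketch of the arithmetic, with per-stage efficiencies that are illustrative assumptions rather than measured figures:

```python
from functools import reduce

def chain_efficiency(stage_efficiencies: list[float]) -> float:
    """Overall efficiency of a series of conversion stages
    (the product of the individual stage efficiencies)."""
    return reduce(lambda acc, eff: acc * eff, stage_efficiencies, 1.0)

# Traditional path: double-conversion UPS (AC->DC, then DC->AC) plus a
# PDU step-down transformer, then the server's own power supply.
# All percentages below are illustrative assumptions.
traditional = chain_efficiency([0.94, 0.94, 0.98, 0.90])

# Direct path: 277 V AC reaches the server, so only the server power
# supply converts it (illustrative PSU efficiency).
direct = chain_efficiency([0.945])

print(f"traditional chain: {traditional:.1%}")
print(f"direct 277 V path: {direct:.1%}")
```

With these assumed numbers the traditional chain delivers under 80% of the input power to the load, while the direct path keeps losses to a single conversion.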

But what about the UPS system? Facebook contemplated installing on-board batteries on its servers, but settled on in-row UPS units. Each UPS unit houses 20 batteries, arranged in five strings of 48-volt DC batteries. Facebook’s power supplies include two connections, one for AC utility power and another for the DC-based UPS system. The company has systems in place to manage surge suppression and deal with harmonics (current irregularities).
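As a rough sketch of how long such an in-row DC UPS can bridge an outage: the five 48-volt strings come from the article, while the amp-hour rating, usable depth of discharge, and rack load below are hypothetical figures for illustration:

```python
def ups_runtime_minutes(strings: int, string_voltage_v: float,
                        capacity_ah: float, load_w: float,
                        usable_fraction: float = 0.8) -> float:
    """Rough bridge time for a battery cabinet feeding a DC bus.

    Ignores discharge-rate (Peukert) effects; usable_fraction caps the
    depth of discharge to protect the batteries.
    """
    energy_wh = strings * string_voltage_v * capacity_ah * usable_fraction
    return 60.0 * energy_wh / load_w

# Five 48 V strings (per the article); the 100 Ah capacity and 40 kW
# rack load are hypothetical.
minutes = ups_runtime_minutes(strings=5, string_voltage_v=48.0,
                              capacity_ah=100.0, load_w=40_000.0)
print(round(minutes, 1))
```

In practice, in-row batteries only need to carry the load for the seconds or minutes it takes standby generators to start and accept the load.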

Here’s a look at a graphic of the data center design for Prineville, released as part of the Open Compute Project:

A look at the data center design for the new Facebook data center in Prineville, Oregon (click for larger image).

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.


10 Comments

  1. Excellent article and these data centers look great. Thanks for sharing.

  2. I actually saw a video for this earlier today. Pretty interesting; they have been working on this for a while now and had a rather large team dedicated to coming up with these new servers.

  3. No such thing as 277V direct to servers that can be operated like an IEC plug, as UL requires all voltage above 250VAC to be hardwired and operated by a certified electrician. 277V is the phase-to-neutral voltage of 480V, and the article states 400/277, so likely it is 400/230VAC (3-phase, and 1 phase to neutral). Could also be 415/240V (3-phase and 1 phase to neutral), which is the same thing manufacturers in IEC and NEC countries have been experimenting with. Can't imagine FB would build a non-UL-compliant DC, but stranger things have happened.

  4. BigDogStudioX

    Very surprised they use both intel and amd

  5. Another emerging term is "Building Enclosure". It serves as the outer shell to help maintain the indoor environment (together with the mechanical conditioning systems) and facilitate its climate control.

  6. LF

    I'm not sure I understand how continuous power is provided with this configuration in case of a power outage. Rack-mounted UPSs have limited runtime, and an outage of several hours or more will result in the DC going down. Can someone explain how Facebook has addressed the risk?