May 29th, 2012 By: Rich Miller
We continue with page 4 of the Facebook Data Center FAQ (or Everything You Ever Wanted to Know About Facebook’s Data Centers).
How Energy Efficient Are Facebook’s Data Centers?
Facebook’s Prineville data center operates at a Power Usage Effectiveness (PUE) measurement for the entire facility of 1.06 to 1.08, and the company expects similar performance from its newly opened data center in North Carolina. The PUE metric (PDF) compares a facility’s total power usage to the amount of power used by the IT equipment, revealing how much is lost in distribution and conversion. An average PUE of 2.0 indicates that the IT equipment uses about 50 percent of the power delivered to the building.
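The metric itself is simple division. A minimal sketch, with hypothetical kilowatt figures chosen only to illustrate the ratio:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power.

    A PUE of 1.0 would mean every watt reaching the building goes to the
    IT equipment; anything above 1.0 is cooling and distribution overhead.
    """
    return total_facility_kw / it_equipment_kw

# At the average PUE of 2.0, IT gear uses half the building's power:
print(pue(10_000, 5_000))              # → 2.0
# Prineville's reported ~1.07 means only about 7% overhead:
print(round(pue(10_700, 10_000), 2))   # → 1.07
```

The lower the PUE, the less of the facility's power is spent on anything other than computing.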
The cool climate in Prineville allows Facebook to operate without chillers, which are used to refrigerate water for data center cooling systems but require a large amount of electricity to operate. With the growing focus on power costs, many companies are designing chiller-less data centers that use cool fresh air instead of air conditioning. On hot days, the Prineville data center is designed to use evaporative cooling instead of a chiller system.
In its cooling design, Facebook adopted the two-tier structure seen in several recent designs, which separates the servers and cooling infrastructure and allows for maximum use of floor space for servers. Facebook opted to use the top half of the facility to manage the cooling supply, so that cool air enters the server room from overhead, taking advantage of the natural tendency for cold air to fall and hot air to rise – which eliminates the need to use air pressure to force cool air up through a raised floor.
Oregon’s cool, dry climate was a key factor in Facebook’s decision to locate its facility in Prineville. “It’s an ideal location for evaporative cooling,” said Jay Park, Facebook’s Director of Datacenter Engineering. The temperature in Prineville has not exceeded 105 degrees in the last 50 years, he noted.
The air enters the facility through an air grill in the second-floor “penthouse,” with louvers regulating the volume of air. The air passes through a mixing room, where cold winter air can be mixed with server exhaust heat to regulate the temperature. The cool air then passes through a series of air filters and a misting chamber where a fine spray is applied to further control the temperature and humidity. The air continues through another filter to absorb the mist, and then through a fan wall that pushes the air through openings in the floor that serve as an air shaft leading into the server area.
“The beauty of this system is that we don’t have any ductwork,” said Park. “The air goes straight down to the data hall and pressurizes the entire data center.”
Testing at the Prineville facility has laid the groundwork for adopting fresh air cooling extensively in North Carolina, even though the climate there is warmer than in Oregon. “Comparing our first phase of Prineville with how we plan to operate Forest City, we’ve raised the inlet temperature for each server from 80°F to 85°, 65% relative humidity to 90%, and a 25°F Delta T to 35°,” wrote Yael Maguire on the Facebook blog. “This will further reduce our environmental impact and allow us to have 45% less air handling hardware than we have in Prineville.”
The Delta T is the difference between the temperature in the cold aisle and hot aisle, meaning the hot aisles in Facebook’s new data center space will be as warm as 120 degrees – not a pleasant work environment for data center admins. Mindful of this, Facebook designed its Open Compute Servers with cabling on the front of the server, allowing them to be maintained from the cold aisle rather than the hot aisle. The contained hot aisles in Prineville are unlit, as the area was not designed to be staffed.
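The 120-degree figure follows directly from the Forest City numbers above: the hot-aisle temperature is the server inlet (cold-aisle) temperature plus the Delta T, the rise as air passes through the servers. A quick check of the arithmetic:

```python
def hot_aisle_f(inlet_f: float, delta_t_f: float) -> float:
    """Hot-aisle temperature: cold-aisle (server inlet) temperature
    plus the Delta T across the servers."""
    return inlet_f + delta_t_f

print(hot_aisle_f(80, 25))  # Prineville phase one → 105
print(hot_aisle_f(85, 35))  # Forest City targets  → 120
```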
Why is Greenpeace Criticizing Facebook?
Facebook’s Oregon data center has been designed to be highly energy-efficient, but it is also located in a town where the local utility uses coal to generate the majority of its power. This fact was soon highlighted by environmental blogs and even a Facebook group. The environmental group Greenpeace International called on Facebook to rethink plans for its Oregon data center and find a way to run the facility entirely on renewable energy.
“Given the massive amounts of electricity that even energy-efficient data centers consume to run computers, backup power units, and power related cooling equipment, the last thing we need to be doing is building them in places where they are increasing demand for dirty coal-fired power,” Greenpeace said in a statement, which was published on its web site. “Facebook and the cloud should be run on clean renewable energy … Facebook could and should be championing clean energy solutions, and not relying on the dirty fuel sources of the past to power their new data center.”
Facebook, which has touted the energy efficiency of the Prineville facility, has responded at length to the issue, both on Data Center Knowledge and directly to Greenpeace.
“It’s true that the local utility for the region we chose, Pacific Power, has an energy mix that is weighted slightly more toward coal than the national average (58% vs. about 50%),” Facebook’s Barry Schnitt said. “However, the efficiency we are able to achieve because of the climate of the region minimizes our overall carbon footprint. Said differently, if we located the data center most other places, we would need mechanical chillers, use more energy, and be responsible for an overall larger environmental impact—even if that location was fueled by more renewable energy.”
The Greenpeace critiques continued throughout 2011. In December 2011, Greenpeace and Facebook announced a truce, with Facebook agreeing to prioritize the use of renewable energy for its data centers, and Greenpeace suspending a social media campaign that targeted the social network. The two organizations also said they will collaborate on the promotion of green energy sources and encourage major utilities to develop renewable energy generation. As part of the agreement, Facebook said it will seek to power its new data centers with clean and renewable energy. The company has already taken a major step in this direction with its latest data center project in Sweden, which will be powered primarily by renewable energy.
What’s Different About Facebook’s Data Center in Sweden?
Facebook is taking a different approach to the electrical infrastructure design at its new data center in Sweden, reducing the number of backup generators by 70 percent. Facebook says the extraordinary reliability of the regional power grid serving the town of Lulea allows the company to use far fewer generators than in its U.S. facilities.
Using fewer generators reduces the data center’s impact on the local environment in several ways. It allows Facebook to store less diesel fuel on site, and reduces emissions from generator testing, which is usually conducted at least once a month.
Local officials in Lulea say there has not been a single disruption in the area’s high voltage lines since 1979. The city lies along the Lulea River, which hosts several of Sweden’s largest hydro-electric power stations. The power plants along the river generate twice as much electric power as the Hoover Dam.
“There are so many hydro plants connected to the regional grid that generators are unneeded,” said Jay Park, Facebook’s Director of Datacenter Engineering. “One of the regional grids has multiple hydro power plants.”
Park says Facebook has configured its utility substations as a redundant “2N” system, with feeds from independent grids using different routes to the data center. One feed travels underground, while the other uses overhead utility poles.
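In a “2N” arrangement, each utility feed is sized to carry the full facility load on its own, so losing either feed (or its physical route) leaves the data center fully powered. A hypothetical sketch of that sizing rule, with capacities invented only for illustration:

```python
def survives_single_feed_loss(feed_capacities_kw, load_kw):
    """True if the remaining feeds can still carry the full load
    after any one feed is lost."""
    return all(
        sum(feed_capacities_kw) - lost >= load_kw
        for lost in feed_capacities_kw
    )

# 2N: two independent feeds, each rated for the entire 10 MW load.
print(survives_single_feed_loss([10_000, 10_000], 10_000))  # → True
# Splitting the load across two half-sized feeds is not 2N:
print(survives_single_feed_loss([5_000, 5_000], 10_000))    # → False
```

Routing one feed underground and the other on overhead poles extends the same principle to the delivery paths: no single physical failure takes out both.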
For those interested in more detailed information, here are links to PDFs and videos of presentations about Facebook’s Infrastructure and operations from members of the Facebook Engineering team.
- Facebook Engineering Front-End Tech Talk (August 2010)
- A Day in the Life of A Facebook Engineer (June 2010)
- IPv6 at Facebook (June 2010)
- Rethinking Servers & Datacenters (November 2009)
- High Performance at Massive Scale at Facebook (Oct. 2009)
- Facebook’s Bandwidth Requirements (Sept. 2009)
- Memcached Tech Talk with Mark Zuckerberg (April 2009)