Should Servers Come With Batteries?

Will the data center of the future have no central UPS units, and be filled with servers with on-board batteries? The data center team at Facebook believes it should, and is pledging to share its best practices – and perhaps wield some of its clout with vendors and data center operators – as it presses its case for change.   

Facebook recently disclosed its plans to adopt a novel power distribution design pioneered by Google that removes uninterruptible power supply (UPS) and power distribution units (PDUs) from the data center. The new design shifts the UPS and battery backup functions from the data center into the cabinet by adding a 12 volt battery to each server power supply.
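Part of the appeal of this design is that per-stage efficiencies multiply along the power path, so every conversion stage removed compounds the savings. A minimal back-of-the-envelope sketch; all per-stage figures below are illustrative assumptions, not measured numbers from Facebook or Google:

```python
# Rough comparison of cascaded power-conversion losses.
# All per-stage efficiencies are assumed values for illustration only.

def chain_efficiency(stages):
    """Overall efficiency of a series of power-conversion stages."""
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# Traditional path: utility AC -> central UPS (AC->DC->AC) -> PDU -> server PSU
traditional = chain_efficiency([0.92, 0.98, 0.90])  # UPS, PDU, PSU (assumed)

# Battery-on-board path: utility AC -> single server PSU with 12 V battery
on_board = chain_efficiency([0.95])  # one high-efficiency PSU (assumed)

print(f"traditional path: {traditional:.1%}")  # ~81% end to end
print(f"on-board battery: {on_board:.1%}")     # ~95% end to end
```

Even with generous assumptions for each stage, the multiplied losses of the conventional chain add up quickly, which is the argument for collapsing it into a single conversion.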

While many of the best practices shared by Google, Microsoft and Facebook can help other data center operators save energy and money, some of these customizations are impractical for smaller operators.

Big Companies, Big Innovation 
“A lot of the innovation in the field is being driven by companies with thousands of servers who really care about the efficiency of these things,” said Facebook’s Amir Michael, who previously worked on Google’s data center team. “We have capital to be able to afford engineers to solve these problems. It’s not really benefiting the rest of the industry. Smaller companies who might deploy fewer servers can’t go and design their own systems.”

In discussing Facebook’s plans for on-board batteries, Michael discussed ways these innovations might become more widely available.

“It’s a chicken and the egg problem,” said Michael. “No one really makes a data center without a UPS, and no one makes a server with a battery on board. Server manufacturers aren’t going to build a server with a battery on board, because no one has a place to deploy that.”

Facebook’s buying power gives it some influence with hardware vendors. Michael noted that Facebook is working with vendors on power supply customizations, and has gotten little pushback from server vendors on its modifications to motherboards.

“Volumes are large enough that server vendors are helping us with that rather than opposing us,” he said. “We’re actually being supported quite well.”

Not all equipment vendors would endorse an industry shift to servers with on-board batteries, however. Makers of UPS equipment and power distribution units (PDUs) are significant players in discussions of industry best practices, and would be unlikely to advocate designs that reduce demand for those products.

Is there a transition that could lead to more options for innovation in power distribution? Michael suggested potential changes in wholesale data center leasing models.

“One example could be to build a data center where you have a portion that has no UPS,” he said. “The data center operator can charge customers a lower rate to deploy their servers in a part of the facility that doesn’t have a UPS. The customer, if they’re savvy, can go and purchase a server which has a battery on board. They’ll pay a little more up front, but in the long run they’ll save money because they’re paying less to operate that server over a period of time.

“We hope to see the industry move to a model like this,” said Michael. “As a customer that leases space in data centers, I would welcome a change like this.”

Facebook is one of the largest customers in the market for turn-key data center space, and leases space from leading providers like Digital Realty Trust, DuPont Fabros Technology and Fortune Data Centers.

Are these cutting-edge energy efficiency strategies only appropriate for large-scale operations like Google and Facebook? Or would enterprises and smaller companies adopt these practices if they had access to them? Facebook says it will be more active in the growing industry conversation about best practices, which it hopes will reveal the answer.  

“It’s no longer okay just to be secretive,” said Michael. “There’s too much at stake.  Smaller companies might use too much of their resources and too much of their capital on their data center infrastructure. They should be allowed to benefit from the same type of optimizations that we’re making here at Facebook.”

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

14 Comments

  1. Take the concept to the logical extreme and we could also shed the datacenter of server cases, along with cabinets. All of these things are needless weight and impediments to proper airflow.

  2. I think the opportunity goes beyond "Smaller companies might use too much of their resources and too much of their capital on their data center infrastructure." We can all get an environmental benefit from data centres consuming less energy to run and cool servers. Not that I could engineer anything more than a lamp, but I'm continually surprised that data centres still take AC, make it DC, have it later converted back to AC, only for it to be converted again to lower-voltage DC.

  3. Frank Sanborn

    These changes in design could be beneficial for people and businesses who live and operate in less developed parts of the world. In fact, if we were to design our systems to run under the conditions there, I bet we would have the most efficient designs available at a lower overall cost and a much lower operational cost.

  4. anon

    To answer your question, probably not. This approach is on the wrong side of the scale curve. Physics dictates that these distributed, smaller battery systems are not nearly as efficient as large-scale power conditioning and distribution. It does have resiliency benefits, but probably only for virtual servers where three 9's is sufficient. There's no way that small batteries and power supplies have longevity at load vs. industrial scale.

  5. I heard the biggest problem with adding batteries to your servers is the fire risk. If a battery goes up (and they do - remember Sony laptops' habit of bursting into flames) then the resulting fire is quite difficult to put out and/or risks taking out the entire DC every time.

  6. Ernie

    Heat from the servers will decrease the life of the batteries. If you have to increase the cooling capacity, what energy have you saved? Most of the new UPS products on the market have an energy efficiency of 93%, e.g. the Liebert NX. New high-density servers require very specific cooling; add batteries to the mixture and the situation gets a little more complicated. Data centers that want to go "green" should look into a power efficiency and cooling audit. The audit takes into consideration both the present condition and future expansion of the center. Not following hot/cold aisle configuration and improper airflow account for up to 50% energy loss.

  7. nate

    What about rack-level DC-DC UPS systems? Rather than using batteries in each server (curious what their run time would be given the size and the draw of the typical server), with so many DC options available I find it surprising that there aren't DC-DC UPSs. From what I understand, the main driver for this is to avoid the losses going from AC (utility) -> DC (UPS) -> AC (rack) -> DC (server).

  8. The fire risk of batteries can be minimised by a fireproof casing: if the temperature gets too high, the battery ejects and the case closes, perhaps with an internal CO2 release to cut off the air supply and starve the fire. Problem solved. As for cooling, servers need the cases to minimise dust, but could refrigerated air not be circulated to aid in cooling? (A closed and sealed door allows for the smallest area for temperature control.)

  9. Saw a couple comments about the AC/DC/AC/DC conversion process. At one ISP I worked for, we colocated our servers in a Sprint telco. For not much extra, we were able to tap their 48V, battery-backed power setup. Essentially, we were now on the same rock-solid power that the telephone company used for voice and data circuits. Granted, our servers did not have DC power supplies; we did have to purchase a 48V DC-AC converter to power our rack. But I remember thinking how cool it would be to have 12V power supplies. Now, having worked with major data centers, we run a lot of 15-25A circuits. Typically there are three circuits per rack of 110V AC. I contemplated how thick and expensive the wiring would be for 48VDC. While we could go a bit smaller at 24V or 12V, it's still a much thicker wire to deal with. It is much harder to deal with DC voltage than it is to deal with AC voltage. The voltage drop over distance is one of the big reasons we went with AC to the home. However, telcos have managed for around 100 years operating everything off of DC power. AC to the building perhaps, but then DC voltage for everything internal. We could look to them for experience and apply it to the data centers of tomorrow. This would also lend itself to supplemental power from wind/water turbines in areas that can support it.
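The wire-thickness concern above follows directly from P = V × I: delivering the same power at a lower voltage requires proportionally more current, and conductor cross-section scales with current. A minimal sketch, with an assumed (hypothetical) rack load:

```python
# Illustrative only: why low-voltage DC distribution needs thicker wire.
# The 2.5 kW rack load is an assumed figure, not from the article.

def current_amps(power_watts, volts):
    """Current required to deliver a given power at a given voltage (P = V * I)."""
    return power_watts / volts

rack_power = 2500  # assumed ~2.5 kW per rack
for volts in (110, 48, 12):
    print(f"{volts:>3} V -> {current_amps(rack_power, volts):.1f} A")
```

At 12 V the same rack draws roughly nine times the current it would at 110 V, which is why 12 V distribution stays inside the cabinet rather than running across the facility.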

  10. Conor

    This idea doesn't scale operationally or logically. Problem 1: Operational monitoring - the NOC would have to monitor the charge level of every battery. This also means battery discharge tests. 100 racks would be a full-time job for a NOC staff member. Problem 2: Different discharge times - a server running at 3A will drain its battery faster than another running at 1A. Each server in a data center would run at a different power consumption rate, which means uptime would be different for each server. To overcome this you would have to interconnect the batteries in each server, which is in effect a centralised solution. Problem 3: The building air conditioning would require a backup anyway to maintain the right temperature, so you would only be replicating that system.

  11. Lin

    @Conor The issue isn't to replace the backup power generation, it's to replace the UPS. The batteries in the servers only have to last long enough for the auxiliary generators to come online. Problem 1 - This can be automated and tied to a server health alerting system. Problem 2 - Irrelevant. Each server only has to run for a few minutes. Problem 3 - You'll still need backup power. What you won't need is a centralized (and expensive) UPS. Two different things.