Comparing the Cost of a Custom Data Center

This is the third article in the DCK Executive Guide to Custom Data Centers series.

It should be noted that a custom data center design may cost somewhat more than a standard data center. This aspect should be examined closely, but a higher initial Capex (whether amortized or factored into a lease) should not be the sole deciding factor. Over the long run, a custom design can actually represent a lower Total Cost of Ownership (TCO) if it results in lower operating costs from improved energy efficiency. Data center designs have also been evolving, particularly over the last several years, to improve energy efficiency. There have been several new designs involving the use of so-called “Free Cooling”, which can greatly impact the TCO.
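The Capex-versus-Opex trade-off above can be sketched with a simple undiscounted calculation. All of the figures below are illustrative assumptions, not numbers from this article; a real analysis would also discount future cash flows and model energy prices.

```python
def total_cost_of_ownership(capex, annual_opex, years):
    """Simple undiscounted TCO: upfront capital plus cumulative operating cost."""
    return capex + annual_opex * years

# Hypothetical standard design: lower Capex, higher energy bill.
standard = total_cost_of_ownership(capex=10_000_000, annual_opex=2_000_000, years=10)

# Hypothetical custom design: 20% higher Capex, but free cooling cuts Opex by 25%.
custom = total_cost_of_ownership(capex=12_000_000, annual_opex=1_500_000, years=10)

print(standard, custom)  # the custom design wins over 10 years despite higher Capex
```

Under these assumed numbers the custom facility costs $2M more up front but $5M less to run over a decade, which is the kind of result that makes the higher initial Capex worthwhile.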

Higher Power and Cooling Densities
Most standard general purpose data center designs can accommodate 100-150 watts per square foot (and/or an average of up to 5 kilowatts per rack). This design is typically based on the use of a raised floor as a cool-air delivery plenum, coupled with down-flow perimeter cooling units. This design has the inherent advantage of a proven track record with standard cooling equipment, and it offers the ability to easily accommodate moves, additions and changes by placing (or replacing) floor tiles to meet the heat load of the rows of racks as needed (until the maximum cooling capacity per rack is reached).

Some organizations have moved to significantly higher power densities, ranging from 10-25 kilowatts per rack. While some data center cooling designs can accommodate more than 5 kilowatts per rack, that capacity is typically available only on a limited, case-by-case basis. Most standard designs cannot properly cool large quantities of high-density racks across the entire data center. These higher power density requirements are typically valid candidates for a custom data center.
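The two density figures quoted above (watts per square foot and kilowatts per rack) are linked by how much floor area each rack occupies. A minimal sketch, assuming roughly 40 square feet of gross floor area per rack (an illustrative rule-of-thumb figure, not one from this article), shows why high-density racks overwhelm a standard design:

```python
def avg_watts_per_sqft(kw_per_rack, racks, floor_area_sqft):
    """Average floor loading implied by a given rack density (illustrative)."""
    return kw_per_rack * 1000 * racks / floor_area_sqft

AREA = 100 * 40  # 100 racks at an assumed 40 sq ft of gross floor area each

low_density  = avg_watts_per_sqft(5, 100, AREA)   # standard: 125 W/sq ft
high_density = avg_watts_per_sqft(20, 100, AREA)  # custom territory: 500 W/sq ft

print(low_density, high_density)
```

At 5 kW per rack the implied load lands inside the 100-150 W/sq ft envelope of a standard design; at 20 kW per rack it is several times beyond it, which is why whole-room high density usually demands a custom design.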

Designs for Extremely High Energy Efficiency
While good energy efficiency is important to any data center, there are two areas where new developments can significantly improve the energy efficiency of the major infrastructure, though each may have other limitations.

Power Systems
In the US market, most data centers use industry-standard voltages within the data center: 480 volts AC for the UPS and cooling equipment, which is then stepped down to 208 or 120 volts AC for most IT equipment. However, some systems beginning to find their way into US data centers are purported to be more energy efficient than the standard power systems. They generally fall into two categories. The first is the European-style system, which distributes 400/230 volts AC within the data center to power the IT equipment. Since this system can be implemented relatively easily and supports virtually any new IT equipment with no change, it is beginning to make some inroads in the US market.

The second is Direct Current (DC) based systems, which generally fall into two sub-categories: one at 380 volts DC, and others at one or more lower voltages, such as 48 volts DC (the US telephone system standard) and several variations based on other lower DC voltages. While these DC-based systems have been built and are in operation at a limited number of sites, at this time they generally require specially designed, custom-built or modified IT equipment. The technical and economic pros and cons of these DC-based systems are still actively debated, but exploring them in detail is beyond the scope of this article. However, before committing to a DC-powered design, be aware that if a universal DC IT equipment standard does not emerge, a DC-based system cannot easily or cheaply be retrofitted to support standard US AC-based, off-the-shelf computing equipment.

It should be noted that while older data centers had much greater losses in their electrical power chain, this was primarily due to older-technology UPS systems. The newest UPS systems are far more energy efficient than their predecessors, which minimizes the energy savings that non-standard power systems offer. Consider this carefully before moving toward a non-standard power system.
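The narrowing gap described above can be illustrated by multiplying the efficiency of each stage in the power chain. The stage efficiencies below are illustrative assumptions for comparison purposes, not measured figures from any vendor or from this article:

```python
def chain_efficiency(*stage_efficiencies):
    """End-to-end efficiency of a series power chain (product of the stages)."""
    eff = 1.0
    for e in stage_efficiencies:
        eff *= e
    return eff

# Assumed stage efficiencies (illustrative only):
#   legacy double-conversion UPS ~88%, modern UPS ~96%,
#   480V -> 208V step-down transformer ~98%.
legacy_ac   = chain_efficiency(0.88, 0.98)  # older UPS + transformer
modern_ac   = chain_efficiency(0.96, 0.98)  # modern UPS + transformer
modern_400v = chain_efficiency(0.96)        # 400/230V: no step-down transformer

print(legacy_ac, modern_ac, modern_400v)
```

Under these assumptions, moving from a legacy to a modern UPS recovers roughly 8 points of end-to-end efficiency, while eliminating the step-down transformer recovers only about 2 more, which is why a modern standard AC design blunts much of the advantage claimed for non-standard distribution.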

Alternate and Sustainable Energy Sources
In most cases the data center simply purchases electricity generated by a utility. The origin of that power has become a matter of public awareness, and some sustainability organizations have criticized operators over it, even when the data center itself is a new, energy-efficient facility. This can impact the public image and reputation of the data center operators. In some cases it has influenced the potential location of the data center, based on the type of fuel used to generate the power, whereas previously those decisions were driven strictly by the lowest cost of power. Some new leading-edge data centers have even begun to build solar and wind generation capacity to partially offset or minimize their use of less sustainable local utility generation fuel sources, such as coal. This would certainly fall under the category of a custom design; however, it would also change the TCO economics, since it raises the upfront capital cost significantly.

Cooling Systems
Of all the factors that impact energy efficiency (and therefore OpEx), cooling represents the majority of facility-related energy usage in the data center, outside of the actual IT load itself. The opportunity to save significant amounts of cooling energy by moderating the mechanical (compressor-based) cooling requirements and expanding the use of “free cooling” is enormous.
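The share of facility energy that cooling consumes is usually discussed through the Power Usage Effectiveness (PUE) metric: total facility power divided by IT power. A minimal sketch, with illustrative load figures that are assumptions rather than numbers from this article:

```python
def pue(it_load_kw, cooling_kw, power_losses_kw, other_kw=0.0):
    """Power Usage Effectiveness: total facility power divided by IT power."""
    total = it_load_kw + cooling_kw + power_losses_kw + other_kw
    return total / it_load_kw

# Illustrative facility: cooling dominates the non-IT load,
# exactly the overhead that free cooling attacks.
example_pue = pue(it_load_kw=1000, cooling_kw=400, power_losses_kw=100, other_kw=50)
print(example_pue)  # 1.55
```

In this example the 400 kW of cooling accounts for most of the 550 kW of non-IT overhead, so cutting compressor hours is the single largest lever on PUE, and therefore on OpEx.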

One of the areas where an investment in customization can produce significant OpEx savings is the expanding use of “Free Cooling”. The traditional standard data center cooling system primarily consists of data-center-grade cooling units (CRAC or CRAH; see part 3, “Energy Efficiency”, for more information), typically placed around the perimeter of the room and blowing cold air into a raised floor. This is typically a closed-loop air path; virtually no outside fresh air is introduced. This means that mechanical cooling, which requires significant energy to operate the compressors, is the primary method of heat removal. This is the time-tested and most commonly utilized design. Some systems include some form of economizer to lower the annual cooling energy, but few standard systems can totally eliminate the use of mechanical cooling.

More recently, however, some data centers have been built using so-called “Fresh Air Cooling”, which brings cool outside air directly into the data center and exhausts the warmed air out of the building whenever outside conditions permit. There are many variations on this methodology, and it is still being developed and refined. It was pioneered and built mostly by Internet giants such as Facebook, Google and Yahoo, and would have been considered unthinkable only a few years ago for an enterprise-class data center. While this is not yet a widespread, commonly accepted method of cooling, some more mainstream operators are considering it for their own data centers. Of course, its effectiveness depends greatly on climatic conditions and therefore is not ideal for every location. (Please see part 3, “Energy Efficiency”.)
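The climate dependence mentioned above can be quantified by asking what fraction of the year the outside air is cool enough for direct fresh-air cooling. A temperature-only sketch (real designs also screen on humidity and air quality; the 24°C threshold below is an illustrative assumption, not a figure from this article):

```python
def free_cooling_fraction(hourly_temps_c, threshold_c=24.0):
    """Fraction of hours cool enough for direct fresh-air cooling.
    Temperature-only sketch; humidity and filtration are ignored."""
    eligible = sum(1 for t in hourly_temps_c if t <= threshold_c)
    return eligible / len(hourly_temps_c)

# Toy climate: half the year's 8760 hours at 15C, half at 30C.
temps = [15.0] * 4380 + [30.0] * 4380
print(free_cooling_fraction(temps))  # 0.5
```

Run against a real hourly weather series for a candidate site, this kind of calculation is what makes one location attractive for fresh-air cooling and another a poor fit.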

You can download a complete PDF of this article series on DCK Executive Guide to Custom Data Centers courtesy of Digital Realty.


About the Author

Kevin Normandeau is a veteran of the technology publishing industry, having worked at a variety of technology sites including PC World, AOL Computing, Network World and International Data Group (IDG). Kevin lives in Massachusetts with his wife and two sons. When he is not in front of the computer (which is most of the time) he likes to get out to ski, hike and mountain bike.
