The Industrialization of Data Centers


When it comes to data center design, the form factor has been getting smaller. The wide-open “barn” layouts of the dot-com boom have yielded to smaller pod architectures, while some vendors and end-users are now optimizing designs around shipping containers.

The focus on smaller spaces provides greater flexibility, but also allows data center builders to standardize many elements of the process, enabling an “industrialization” of data center design. That term has been adopted recently by the world’s two largest data center operators – Digital Realty Trust and IBM.

“We’re really seeing the standardization really resonating with a lot of customers,” said Jody Cefola, Site and Facilities Services Marketing Manager for IBM. “We liken it to what’s happened in IT, where companies have gone to standardized operating environments in their software.

“You want the data centers to be designed and operated so that you know you’ll have a similar environment in your data centers in different places,” said Cefola. “It becomes less complex because you’re using the same standard operating environment. It’s much easier than if everything is one-off. We’re seeing the desire to adapt to change and to the global environment.”

The trend has been driven in part by the need to conserve capital by building large footprints in phases, while compartmentalizing space to support different power and cooling loads, and in some cases offer dedicated power infrastructure to customers in multi-tenant facilities.

But standardization also offers enormous advantages to data center builders, allowing designers to develop repeatable approaches to many elements of the construction process.

Standardization allows companies to build data centers faster and cheaper, according to Chris Crosby, Senior Vice President of Digital Realty Trust (DLR). Rather than reinventing the wheel with a custom approach to every new facility, Digital’s pod-based design allows it to standardize on generators, transformers and power topologies.

Digital Realty can then lower costs in its supply chain through volume orders of these components, and also have a “buffer inventory” of critical items with long delivery timelines – especially diesel generators. This is critical, Crosby says, because it eliminates costly delays in project completion.

Digital Realty builds out its Turn-Key Datacenter space using a pod architecture that divides each property into compartmentalized data centers of between 8,000 and 12,000 square feet. It also standardizes on the UPS kilowatt as a production unit, focusing on the cost per kilowatt when analyzing costs in different markets.
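The cost-per-UPS-kilowatt yardstick is straightforward to express: total build-out cost divided by UPS capacity gives a single number for comparing markets. The sketch below is only a hypothetical illustration of that arithmetic — all dollar figures, capacities, and market names are invented, not Digital Realty data.

```python
# Hypothetical illustration of a cost-per-UPS-kilowatt comparison.
# All figures below are invented for the example, not actual data.

def cost_per_kw(total_buildout_cost: float, ups_capacity_kw: float) -> float:
    """Return build-out cost per kilowatt of UPS capacity."""
    return total_buildout_cost / ups_capacity_kw

# Invented sample markets: (build-out cost in USD, UPS capacity in kW)
markets = {
    "Market A": (11_000_000, 1_100),
    "Market B": (13_500_000, 1_125),
}

for name, (cost, kw) in markets.items():
    print(f"{name}: ${cost_per_kw(cost, kw):,.0f} per UPS kW")
```

Normalizing every project to this one unit is what lets phased pod build-outs in different markets be compared on a like-for-like basis.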

Using this approach, Digital Realty was able to convert powered shell space into a finished raised floor data center in 16 to 20 weeks in 2007. The company is shooting for a completion time of 20 weeks for its 2008 projects, and hopes to eventually streamline the process to 16 weeks.

In June IBM introduced a family of modular data centers featuring standardized, repeatable designs for several modular “form factors,” including a 5,000-square-foot module for enterprise customers, data center container products in both 20-foot and 40-foot shipping containers, and a 200-square-foot module that allows users to quickly create a high-density zone within a low-density data center.

“It’s very clear that data center design and build has to change dramatically,” said Steve Sams of IBM Global Technology. “We have to change the model that’s been used over the last 20 years to really design in scalability.”

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.


One Comment

  1. Gary Olson

    I don't know why it's surprising that it is faster and more reliable to do common designs that are fully tested and verified--it makes sense that this approach would be faster, lower cost, and more reliable than the custom approach. The only question is why anyone would want to do it any other way...