A Closer Look at Dell's Cloud Design

Dell has been doing some interesting things in its Data Center Solutions Group, but much of its work is focused on custom solutions. Chief among them is Dell’s data center container product, a double-decker unit housing IT equipment in one 40-foot shipping container and power and cooling infrastructure in another.

Dell’s Jimmy Pike gave a presentation about the Dell container at last week’s CloudWorld/Next Generation Data Center Conference in San Francisco. Pike, the director of system architecture at Dell Data Center Solutions, spoke about using containers to build a “cloud optimized” data center.

Pike described a design approach that echoes many ingredients of Microsoft’s Generation 4 Modular Data Center design. Dell executives have said that Microsoft will use Dell’s containers in its new Chicago data center, although Microsoft is also known to have tested containers from other vendors.

Here are some of the key features of the “closed-loop” efficiency model applied to entire data centers:

  • Minimalist Building: A physical structure focused on providing shelter and physical security. That means no raised floor; cabling and power run overhead.
  • Cooling: Using outside air (free cooling) whenever possible, along with hot-aisle containment to prevent mixing hot and cold air (see the control sketch after this list).
  • Run Warmer: Input air can run as warm as 95 degrees F (35 degrees C), with exhaust air as high as 120 degrees F in the hot aisle.
  • Power: Use a Tier 1 or Tier 2 power infrastructure, simplifying to a single path for distribution. Use AC power with voltage “as high as the equipment will allow” and avoid placing alternative power sources in-line through the use of “side-looking” UPS systems.
  • Equipment: Use servers with high-efficiency power supplies and variable speed fans to take advantage of lower input temperatures.
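Taken together, the cooling bullets describe a simple economizer policy: feed the cold aisle with outside air whenever it is cool enough, and fall back to mechanical cooling only when it is not. Below is a minimal Python sketch of that decision, assuming the 95-degree intake ceiling from Pike’s talk; the function name, the hysteresis band, and the fallback behavior are illustrative assumptions, not details from the presentation.

```python
# Hypothetical economizer control sketch based on the design points above.
# The 95F intake ceiling comes from the presentation; everything else
# (names, hysteresis value, fallback behavior) is an illustrative assumption.

INTAKE_MAX_F = 95.0   # maximum allowed server inlet temperature
HYSTERESIS_F = 2.0    # assumed buffer to avoid rapid mode switching

def cooling_mode(outside_air_f: float, current_mode: str = "free") -> str:
    """Choose between free cooling (outside air) and mechanical cooling."""
    if outside_air_f <= INTAKE_MAX_F - HYSTERESIS_F:
        return "free"        # outside air is cool enough to feed the cold aisle
    if outside_air_f > INTAKE_MAX_F:
        return "mechanical"  # too warm; fall back to chillers
    return current_mode      # inside the hysteresis band: hold the current mode

if __name__ == "__main__":
    for t in (70, 94, 96, 104):
        print(f"{t}F outside -> {cooling_mode(t)}")
```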

This results in a “closed loop” system that achieves a Power Usage Effectiveness (PUE) as low as 1.28 in humid climates and 1.11 in arid climates. Pike said this design approach was developed with a scale-out cloud computing data center in mind, and would not be appropriate for many other types of data centers.
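Some quick arithmetic makes those figures concrete: PUE is total facility power divided by IT power, so everything beyond the IT load (cooling, distribution losses and so on) amounts to IT load times (PUE minus 1). A minimal sketch, assuming a 1 MW IT load purely for illustration:

```python
# PUE = total facility power / IT equipment power (standard definition).

def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Power spent on cooling, distribution, etc., beyond the IT load itself."""
    return it_load_kw * (pue - 1.0)

IT_LOAD_KW = 1000.0  # assumed 1 MW IT load, for illustration only

for label, pue in (("humid climate", 1.28), ("arid climate", 1.11)):
    print(f"{label}: PUE {pue} -> {overhead_kw(IT_LOAD_KW, pue):.0f} kW of overhead")
```

At that load, the gap between the two climates works out to 170 kW of continuous overhead, which is the kind of saving the free-cooling and run-warmer choices are after.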

“Use of these principles does deviate from what is considered traditional enterprise norm, but will provide the most cost-effective system attainable,” Pike notes in his presentation.

About the Author

Rich Miller is the founder and editor-in-chief of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

Comments

  1. What does one of these babies cost?

  2. someone

    Well over $1M, I would guess, before you put the servers in. But that's a whole lot less than building a new data center, and 24x54U standard racks is a lot of capacity.

  3. For the same IT load, containers can be ~20% lower in CAPEX and annual OPEX than a new brick-and-mortar build. They compare favorably against colocation as well, becoming cash flow positive in less than 3 years. Plus, you're not locked into a 5-, 10- or 15-year contract. And much of your power and cooling infrastructure gets upgraded every time you swap out the IT, by swapping out the container at the same time. But they aren't perfect for every situation.
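For anyone wanting to sanity-check that payback claim against colocation, here is a rough cash-flow sketch. Every figure in it is an assumption chosen for illustration (the container price loosely follows the guess in comment 2); none come from the article or the comment.

```python
# Illustrative only: all dollar figures below are assumptions, not sourced.

CONTAINER_COST = 1_200_000      # assumed up-front container cost
COLO_ANNUAL_RENT = 450_000      # assumed yearly colocation cost, same IT load
CONTAINER_ANNUAL_OPEX = 30_000  # assumed power/maintenance you now carry yourself

cumulative = -CONTAINER_COST
for year in range(1, 6):
    cumulative += COLO_ANNUAL_RENT - CONTAINER_ANNUAL_OPEX
    print(f"year {year}: net position vs. colo = ${cumulative:,}")
# With these assumptions the container turns cash flow positive during year 3.
```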