
1,500 Watts a Square Foot? A Look at TSCIF

Switch Communications says it is successfully cooling a section of its Las Vegas data center running at nearly 1,500 watts per square foot using air cooling. How are they accomplishing this?

The key to Switch's high-density cooling is a design known as Thermal Separate Compartment in Facility (TSCIF), according to company co-founder Rob Roy. The ingredients in this approach include high-capacity AC units placed outside the data center area, and a tightly integrated hot aisle containment system for the racks. Here's an overview:

  • The cabinets are set on a slab, with no raised floor.
  • Chilled air is delivered into the cold aisle near the ceiling rather than through the floor, and enters the cabinets through the front.
  • Each cabinet fits into a slot in the TSCIF unit, which encapsulates the rear and sides of each cabinet, while the open front extends beyond the enclosure.
  • The hot aisle containment system delivers waste heat back into the ceiling plenum, where it can be returned to the chiller.

Some photos of the TSCIF system can be seen here, and more images and diagrams are available on the Switch web site. A number of data center providers forgo a raised floor in favor of overhead cooling, most notably Equinix (EQIX). Heat containment systems are also becoming more widely used.

Switch says the combination of those techniques, along with custom cooling equipment, enables it to handle unusually high power and heat loads. Roy says the data center cold aisle is maintained at 68 degrees Fahrenheit, while the temperature in the hot aisle reaches well above 100 degrees, creating a heat differential of nearly 40 degrees.

The key benchmark in designing the system, according to Roy, is the cubic feet per minute (CFM) of cooling air that can be pushed into the equipment area. "It's the most important piece to do high density," said Roy, who focused on CFM performance in designing the air conditioning systems for Switch. The TSCIF system will also be used in the company's new SuperNAP, a 407,000 square foot data center that will be supported by 30,000 tons of redundant (system-plus-system) cooling and 30 cooling towers, with a capacity of 4.5 million CFM.
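To put the CFM benchmark in context, here is a back-of-the-envelope sketch using the standard HVAC sensible-heat relation (BTU/hr ≈ 1.08 × CFM × ΔT°F). This is a textbook rule of thumb applied to the figures quoted in the article, not Switch's published design math:

```python
# Rough airflow sizing from the article's figures, using the common
# sea-level sensible-heat approximation: BTU/hr = 1.08 * CFM * delta_T (F).
# The formula and constants are standard HVAC assumptions, not Switch data.

WATTS_TO_BTU_HR = 3.412   # 1 watt = 3.412 BTU/hr
AIR_FACTOR = 1.08         # sensible-heat constant for air at sea level

def cfm_required(watts: float, delta_t_f: float) -> float:
    """Airflow in CFM needed to remove `watts` of heat at a given delta-T (F)."""
    return watts * WATTS_TO_BTU_HR / (AIR_FACTOR * delta_t_f)

# Article figures: ~1,500 watts per square foot, ~40 F hot/cold differential.
cfm_per_sqft = cfm_required(1500, 40)
print(f"{cfm_per_sqft:.0f} CFM per square foot")  # roughly 118 CFM/sq ft
```

At roughly 118 CFM per square foot of equipment area, it becomes clear why Roy treats airflow volume, rather than chiller tonnage alone, as the governing design constraint.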

The cooling benchmarks for the Switch facilities are bound to receive scrutiny as the company prepares to open the SuperNAP. Some in the industry are already skeptical of Roy's claims, according to coverage by Ashlee Vance at The Register.

The Register also noted that some high-profile technology executives have vouched for Switch's performance, including Sun Microsystems Chairman Scott McNealy and executives at Cisco Systems (CSCO). "In my opinion Switch has the finest data centers available anywhere," David Matanane, the senior manager of hosted services at Cisco Systems, told The Register. We spoke with one industry source familiar with the Las Vegas data center scene. "Rob Roy is the real deal," he told us.

One thing is for certain: at a time when many data center operators are struggling to cool high-density server installations in their facilities, the technologies that Switch Communications is rolling out for its Las Vegas SuperNAP will prompt additional discussion about new approaches to cooling.

TAGS: Switch