1,500 Watts A Square Foot? A Look at TSCIF

Switch Communications says it is successfully cooling a section of its Las Vegas data center running at nearly 1,500 watts per square foot using air cooling. How are they accomplishing this?

The key to Switch’s high-density cooling is a design known as Thermal Separate Compartment in Facility (TSCIF), according to company co-founder Rob Roy. The ingredients in this approach include high-capacity AC units placed outside the data center area, and a tightly integrated hot aisle containment system for the racks. Here’s an overview:

  • The cabinets are set on a slab, with no raised floor.
  • Chilled air is delivered into the cold aisle near the ceiling rather than through the floor, and enters the cabinets through the front.
  • Each cabinet fits into a slot in the TSCIF unit, which encapsulates the rear and sides of each cabinet, while the open front extends beyond the enclosure.
  • The hot aisle containment system delivers waste heat back into the ceiling plenum, where it can be returned to the chiller.

Some photos of the TSCIF system can be seen here, and more images and diagrams are available on the Switch web site. A number of data center providers forego a raised floor for overhead cooling, most notably Equinix (EQIX). Heat containment systems are also becoming more widely used.

Switch says the combination of those techniques, along with custom cooling equipment, enables it to handle unusually high power and heat loads. Roy says the data center cold aisle is maintained at 68 degrees, while the temperature in the hot aisle reaches well above 100 degrees, creating a heat differential of nearly 40 degrees.
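The arithmetic behind that differential can be sketched with the standard sensible-heat relation for air, Q (BTU/hr) ≈ 1.08 × CFM × ΔT (°F). This is our own back-of-envelope illustration, not Switch's published math:

```python
# Sensible heat relation for air at roughly sea-level density:
#   Q (BTU/hr) ~= 1.08 * CFM * delta_T (deg F)
# Rearranged here to find the airflow needed per kilowatt of IT load.

BTU_PER_HR_PER_KW = 3412  # 1 kW ~= 3,412 BTU/hr

def cfm_per_kw(delta_t_f: float) -> float:
    """Approximate CFM needed to remove 1 kW at a given air delta-T."""
    return BTU_PER_HR_PER_KW / (1.08 * delta_t_f)

# At the ~40 F differential Roy describes (68 F cold aisle, 100+ F hot aisle):
print(round(cfm_per_kw(40)))   # ~79 CFM per kW
# At a more typical ~20 F differential, the airflow requirement doubles:
print(round(cfm_per_kw(20)))   # ~158 CFM per kW
```

The wider the hot/cold differential, the less air has to move per kilowatt, which is why tight containment and a hot aisle "well above 100 degrees" make extreme densities easier to cool.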

The key benchmark in designing the system, according to Roy, is the cubic feet per minute (CFM) of cooling that can be pushed into the equipment area. “It’s the most important piece to do high density,” said Roy, who said he focused on CFM performance in designing the air conditioning systems for Switch. The TSCIF system will also be used in the company’s new SuperNAP, a 407,000 square foot data center that will be supported by 30,000 tons of redundant (system-plus-system) cooling and 30 cooling towers, with a capacity of 4.5 million CFM.
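Those published SuperNAP numbers can be cross-checked against each other. The sketch below is our arithmetic, not Switch's, and assumes "system-plus-system" means roughly half the tonnage is active at any time:

```python
# Back-of-envelope consistency check on the published SuperNAP figures:
# 30,000 tons of system-plus-system cooling and 4.5 million CFM.

TONS_TOTAL = 30_000
BTU_PER_HR_PER_TON = 12_000
CFM_TOTAL = 4_500_000

# Assumption: "system-plus-system" redundancy means about half the
# tonnage carries the load at any given time.
active_btu_hr = (TONS_TOTAL / 2) * BTU_PER_HR_PER_TON

# Air-side delta-T implied by moving that heat with 4.5M CFM:
#   Q = 1.08 * CFM * delta_T  =>  delta_T = Q / (1.08 * CFM)
delta_t = active_btu_hr / (1.08 * CFM_TOTAL)
print(round(delta_t, 1))  # ~37 F
```

A ~37 degree implied differential lines up with the nearly 40 degree hot/cold spread Roy cites for the existing facility, so the published figures are at least internally consistent.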

The cooling benchmarks for the Switch facilities are bound to receive scrutiny as the company prepares to open the SuperNAP. There are already some in the industry who are skeptical of Roy’s claims, according to coverage by Ashlee Vance at the Register.

The Register also noted that some high profile technology executives have vouched for Switch’s performance, including Sun Microsystems Chairman Scott McNealy and executives at Cisco Systems (CSCO). “In my opinion Switch has the finest data centers available anywhere,” David Matanane, the senior manager of hosted services at Cisco Systems, told The Register. We spoke with one industry source, who is familiar with the Las Vegas data center scene. “Rob Roy is the real deal,” he told us.

One thing is for certain: at a time when many data center operators are struggling to cool high-density server installations in their facilities, the technologies that Switch Communications is rolling out for its Las Vegas SuperNAP will prompt additional discussion about new approaches to cooling.


About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.


  1. We have been solving airflow problems in electronics for 12 years, so let me weigh in here. Assuming enough CFM is supplied and the hot/cold air is marginally separated (which, based on the description, was done here), this design should work just fine. The greater the oversupply of CFM, the less defined the hot/cold separation needs to be. What limits density in most sites is improper or marginal airflow distribution, mainly on the underfloor supply (cold aisle) but also the return (rack/CRAC placement). That said, you can do any type of containment strategy you wish. But if you don’t supply enough CFM, the servers at the ends of aisles and the tops of racks WILL run hot. Wally Phelps www.adaptivcool.com

  2. I agree with Wally: if the separation is maintained and sufficient air is delivered to the racks, then this is no issue. Roy nails it on the head, and it seems that the customers who struggle with density are missing this critical point — you must be able to deliver the air to the load. My quick numbers show a rack density of around 24kW, assuming a 4 foot cold aisle and hot aisle and a 4 foot deep rack, discounting side aisle space. This has been done, and is being done daily. I would love to see the PUE numbers for this type of layout, though. The distance the air must travel plays a big part in the fan power requirements. Daniel Kennedy www.rittal-corp.com
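Daniel's ~24kW figure can be reproduced from the 1,500 watts per square foot claim if each rack is charged for its own footprint plus half of each adjacent aisle. This is our reconstruction of his estimate; the 2-foot rack width is our assumption:

```python
# Reconstructing the ~24 kW/rack back-of-envelope estimate from the
# comment above. Rack width of 2 ft (a standard 24-inch cabinet) is
# assumed; the aisle and depth figures come from the comment.

WATTS_PER_SQFT = 1500   # Switch's claimed density
RACK_DEPTH_FT = 4
COLD_AISLE_FT = 4       # shared between the two rows facing it
HOT_AISLE_FT = 4        # likewise shared
RACK_WIDTH_FT = 2       # assumed

# Each rack "owns" its own depth plus half of each adjacent aisle:
depth = RACK_DEPTH_FT + COLD_AISLE_FT / 2 + HOT_AISLE_FT / 2
area_sqft = depth * RACK_WIDTH_FT      # 16 sq ft per rack position
kw_per_rack = WATTS_PER_SQFT * area_sqft / 1000
print(kw_per_rack)  # 24.0
```

Sixteen square feet per rack position at 1,500 W/sq ft gives exactly 24 kW, matching the comment's estimate.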