A Look Inside the Vegas SuperNAP (Part 2)
The dry Vegas climate is also critical to the efficiency of its facilities' cooling operations. “You can really efficiently cool very dry air,” said Roy. “It's hard to create efficient cooling in places where it's truly humid. And blending hot and cold air is ridiculous.”
So is Switch a unique, location-specific opportunity? Or can others apply parts of these approaches to improve high density data center design? Roy says Switch has filed 26 patents covering the innovations in its Las Vegas operations.
The company's T-SCIF design builds upon several existing approaches, combining a slab floor and overhead cooling (a design option seen at Equinix facilities) with complete hot-air containment and a ceiling plenum for hot air return (similar in concept to Oracle's design of its Austin data center). Switch's design combines the best of both those approaches and adds its own refinements. In NAP4, the ceiling plenum routes hot air out of the data center into a “heat aisle” between server rooms. The CRAC units are inserted into openings in the wall, but turned backwards so they draw air from the heat aisle, cool it, and return it to the server area.
“We contain all the heat from day one,” said Roy. “Heat is 100 percent contained and processed into heat ceilings. Cold air is dropped past all 42U servers. We've tested and proven this by thermal imaging. The size of the (cold air) duct work can support enough CFM for 24kW per rack. It's completely modular and can be adjusted to suit lower density requirements.”
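Roy's claim that the ductwork can carry enough airflow for a 24kW rack can be sanity-checked with the standard HVAC sensible-heat relation, BTU/hr = 1.08 × CFM × ΔT°F. A minimal sketch (the 20°F supply-to-return temperature split is an assumption for illustration, not a figure from Switch):

```python
def required_cfm(heat_kw: float, delta_t_f: float = 20.0) -> float:
    """Airflow (CFM) needed to remove a sensible heat load from a rack.

    Uses the standard sensible-heat relation for air near sea level:
        BTU/hr = 1.08 * CFM * delta_T(F)
    """
    btu_per_hr = heat_kw * 3412.14  # 1 kW = 3412.14 BTU/hr
    return btu_per_hr / (1.08 * delta_t_f)

# A 24 kW rack with an assumed 20 F split needs on the order of 3,800 CFM.
print(round(required_cfm(24)))
```

At higher temperature splits the required airflow drops proportionally, which is one reason full hot-air containment (a large, stable ΔT) makes high-density cooling more tractable.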
The innovations at the SuperNAP will be harder to replicate. The entire facility is designed around an air-conditioning system that places the custom WDMD cooling units outside the facility. Roy says that incorporating large openings for cooling required reinforced concrete, making the design difficult to recreate outside of new construction. The building is 1,100 feet long and more than 300 feet wide, with a 30-foot ceiling height that allows for a 15-foot heat ceiling plenum.
A tour of Switch’s data centers shows the evolution of its approach from a traditional design to its latest innovations. As customers sought higher densities, Roy and his team spent a year attending conferences and consulting with industry experts, and were frustrated by the process.
“The designs were based on 10 or 20-year-old patents, and they're still built today like they were 10 years ago,” said Roy. “We said ‘we have to stop looking to the industry for a solution. We have to build it ourselves.’ We got outside the box and talked to creative people addressing problems similar to ours.”
In Vegas, that meant the casino industry. “The casinos are extremely unique,” said Roy. “They're glass buildings in the desert. They put large amounts of research and development into cooling systems. They all did it a little differently.” Roy said those discussions led him to combine a number of the casinos' techniques into the design of the WDMD cooling units.
As it implemented its T-SCIF design, Switch began winning deals with major technology companies. Roy is not bashful in asserting the merits of Switch’s infrastructure and design innovations. Some prospects were initially puzzled by the claims they heard from Roy and his team, but soon became believers.
“We were consolidating some of our data center operations, and were fascinated with their proposal,” said Dan Butzer of Sun Microsystems' Network.com operation. “We assumed going in that because we would need to cool 20kW racks, we'd wind up with water or some kind of liquid solution.
“We’ve been very impressed with the technology,” Butzer said of Switch. “They promise a lot, and they deliver a lot. These guys know their stuff. We’re planning to significantly expand our footprint when the SuperNAP opens. They keep their word, and they do what they say they’re going to do. It takes a lot of complexity off my plate.”
Making a believer out of Sun helped Switch win other clients, including some of the Internet’s largest companies. Cisco Systems has also acknowledged being a Switch customer. Roy can’t speak publicly about most of his other clients, but they are companies that are serious about high-density computing.
“We haven’t had a client below 400 watts per square foot for a year,” said Roy.
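A watts-per-square-foot figure like Roy's can be translated into an average per-rack load, given an assumed floor allocation per rack. A hypothetical back-of-envelope conversion (the 25 square feet allocated per rack, covering the cabinet plus aisle space, is an illustrative assumption, not a number from the article):

```python
def avg_rack_kw(watts_per_sqft: float, sqft_per_rack: float = 25.0) -> float:
    """Convert a floor-loading density (W/sq ft) to an average per-rack kW.

    sqft_per_rack is the total floor area allocated per rack,
    including its share of aisle space (assumed, not from Switch).
    """
    return watts_per_sqft * sqft_per_rack / 1000.0

# At 400 W/sq ft with ~25 sq ft per rack, each rack averages 10 kW.
print(avg_rack_kw(400))  # → 10.0
```

Under that assumption, 400 watts per square foot works out to roughly 10kW per rack on average, several times the 2-4kW typical of conventional raised-floor facilities of the era.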