LAS VEGAS - Once you've built the mighty SuperNAP, what do you do for an encore? If you're data center provider Switch, you build a better SuperNAP right next door.
The debut of the SuperNAP data center in 2009 put Switch on the map in a big way. At more than 400,000 square feet, the SuperNAP offered unprecedented scale and the ability to support extreme power density. The facility hosts servers and storage for many of the world's leading technology companies, including more than 40 cloud computing companies and a dense concentration of network carriers.
The company's newest creation, known as SuperNAP 8, builds on that foundation with a number of innovations in cooling and reliability. The building has just become the first multi-tenant data center to earn Tier IV Constructed Facility certification, the highest level in The Uptime Institute's classification system for mission-critical reliability.
For Switch founder and CEO Rob Roy, SuperNAP 8 is the culmination of a decade-long effort to rethink the data center. The design for SuperNAP 8 can operate effectively in any climate, providing an ultra-efficient template for global growth. Switch is finalizing plans for an international expansion, with details to be announced later this year.
"We’ve really been focused on creating the world’s best data center," said Roy, who has patented many of the design innovations at Switch. "SuperNAP 8 is the end game of that effort. I’ve wanted to see if we could create one global standard for our data centers."
First Tier IV Colocation Facility
The effort has made an impression on The Uptime Institute, which has evaluated data centers around the world for its Tier certification program. Only four data centers in the U.S. have ever earned Tier IV Constructed Facility certification, the highest level, and until now all have been single-tenant financial services data centers.
"The first Tier IV Facility Certification in the colocation sector speaks for itself: another world-class accomplishment,” said Ed Rafter, Vice President of Technology for The Uptime Institute. "Switch SuperNAP 8 has incorporated a number of well-planned and innovative solutions for their facilities infrastructure requirements."
SuperNAP 8 is the next step in Roy's vision for a massive technology ecosystem in Las Vegas. Switch now has more than 1,000 customers and 315 employees, and its projects keep 1,000 construction workers employed. The 300,000 square foot SuperNAP 8 facility is built several hundred yards from the original SuperNAP (now known as SuperNAP 7).
SuperNAP 8 was built using pre-fabricated modular components manufactured by Switch. The major building block is known as a MacroMOD, and includes two data halls. Switch is installing customers in the first two data halls, which represent half of the building's total capacity.
So what's different about SuperNAP 8? Data Center Knowledge recently toured the new facility, which features the same combination of density and efficiency seen at SuperNAP 7. That building operates at a full-year Power Usage Effectiveness (PUE) of 1.18, nearly on par with Google, which reports a full-year PUE of 1.12 across its fleet of data centers.
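For context, PUE is simply total facility energy divided by the energy consumed by the IT equipment alone. A minimal sketch of the math, using hypothetical annual kilowatt-hour figures chosen only to match the ratios quoted above:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual figures, picked to illustrate the quoted PUEs.
supernap_pue = pue(11_800_000, 10_000_000)  # 1.18
google_pue = pue(11_200_000, 10_000_000)    # 1.12

# At PUE 1.18, cooling and power-distribution overhead adds 18 percent
# on top of the IT load; at 1.12 the overhead is 12 percent.
print(supernap_pue, google_pue)
```

A PUE of 1.0 would mean every watt entering the building reached a server, which is why values below 1.2 are considered exceptional, particularly for a multi-tenant facility.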
This level of efficiency is unusual for a multi-tenant facility, which has less flexibility in pushing the boundaries of server inlet temperature. Switch operates the SuperNAPs' server halls at 69 degrees and 40 percent humidity, while hyperscale players like Google and Facebook can push temperatures closer to 80 degrees.
A high-level change in the new design is how the data center is organized. At SuperNAP 7, a massive power spine runs down the center of the building, with data halls and power rooms on each side. At SuperNAP 8, all the power rooms are together along the perimeter of one side of the building, with the power spine alongside.
The data halls are now together in the remainder of the interior space, with the exterior cooling units lining the far side of the building. This diagram provides a cross-section of the facility, showing the placement of (from left to right) the generators, power rooms, power spine, data halls, and cooling units.
Separating the power equipment from the servers and the cooling units provides additional reliability, limiting the potential for problems should the electrical gear fail.
For SuperNAP 8, Switch has super-sized its versatile custom cooling units, which now each provide 1,000 tons of air handling (as opposed to 600 tons for the units at SuperNAP 7). The units are slightly narrower so they can all fit on one side of the building, but the overhead ductwork within the SuperNAP has been widened to accommodate more cubic feet per minute (CFM), as have the cold and hot aisles of Switch's custom containment system.
The cooling units, which are housed outside the building, are unusually versatile, supporting six different modes of cooling. The software that manages the system selects the most efficient cooling method based on the exterior temperature and other conditions. The new units at SuperNAP 8 feature distinctive hoods, a reflection of Switch's ambitions for expansion into new geographic markets. The hoods will protect the units from ice and snow accumulation in colder climates, and also allow Switch to use exhaust air from the data center's hot aisles to melt snow.
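Switch has not published the selection logic, but the basic idea of choosing the cheapest adequate cooling mode from outdoor conditions can be sketched as follows. The mode names and thresholds here are illustrative stand-ins, not Switch's actual six modes or control criteria:

```python
def select_cooling_mode(outdoor_temp_f: float, outdoor_humidity_pct: float) -> str:
    """Pick an economical cooling mode for the current weather.

    Modes and thresholds are hypothetical placeholders for the six
    modes Switch's units support; the real control software applies
    its own, unpublished criteria.
    """
    if outdoor_temp_f < 55:
        # Cool outside air can be used directly: "free cooling."
        return "direct outside air"
    if outdoor_temp_f < 69 and outdoor_humidity_pct < 40:
        # Exchange heat with outside air without mixing air streams.
        return "indirect air-to-air exchange"
    if outdoor_humidity_pct < 25:
        # Dry desert air makes evaporative cooling very efficient.
        return "evaporative cooling"
    # Hot and humid: fall back to compressor-based cooling.
    return "mechanical (DX) cooling"

print(select_cooling_mode(50, 30))   # direct outside air
print(select_cooling_mode(95, 15))   # evaporative cooling
```

The point of a multi-mode design is that mechanical refrigeration, the most energy-hungry option, runs only when no cheaper mode can hold the target room conditions.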
Flywheels Boost Cooling Reliability
A new wrinkle at SuperNAP 8 is the Rotofly system, which uses 2,000 pounds of rotary flywheels to provide extended runtime for each HVAC unit. In the event of a power outage, this capability ensures that the cooling units will continue to move air through the data halls.
The cooling improvements extend inside the data center, in the form of a steel framework known as the Black Iron Forest. The steel serves a dual role: it physically supports Switch's aisle containment system (known as a T-SCIF), and it acts as thermal storage, absorbing heat from the air around it to help cool the room and provide a cushion during cooling failures.
"When it’s 69 degrees, that iron stays at 69 degrees for a long time," said Roy. "They’re like thermal radiators for cold. They’re heavier than they need to be on purpose so they can retain that temperature. That steel keeps that environment much cooler."
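A rough back-of-the-envelope calculation shows why deliberately heavy steel helps. The figures below are assumptions for illustration, not Switch's published specifications, and in practice the steel would absorb only part of the heat load as room air mixes:

```python
# Specific heat of steel, roughly 0.49 kJ per kg per kelvin.
STEEL_SPECIFIC_HEAT_KJ_PER_KG_K = 0.49

def buffer_minutes(steel_mass_kg: float, temp_rise_k: float, heat_load_kw: float) -> float:
    """Minutes a steel mass could absorb a given heat load while
    warming by temp_rise_k kelvin (idealized: all heat goes into
    the steel)."""
    absorbed_kj = steel_mass_kg * STEEL_SPECIFIC_HEAT_KJ_PER_KG_K * temp_rise_k
    return absorbed_kj / heat_load_kw / 60.0

# Example: 100 tonnes of steel allowed to warm by 5 K against a
# 500 kW heat load buys roughly eight minutes of buffering.
print(round(buffer_minutes(100_000, 5, 500), 1))  # ~8.2
```

Even a few minutes of thermal inertia matters: it is enough time for backup cooling to spin up before server inlet temperatures climb out of range.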
Roofs may not seem like a sexy data center feature. But Roy says that roofing will play a critical role in the life span of a data center. "Ninety-nine percent of data centers will need to have their roofs replaced during their useful life, and the data center is incredibly vulnerable while that’s happening," said Roy.
That's why SuperNAP 8 features SwitchSHIELD, a double-roof system that can protect the data center from wind speeds of up to 200 miles per hour. The two roof decks are located nine feet apart, are attached to the concrete and steel shell of the facility, and contain no roof penetrations. This allows Switch to replace either roof level without any loss of protection for the servers housed in the data hall.
SwitchSHIELD is another feature that has been added with geographic expansion in mind. Tornadoes are exceedingly rare in Las Vegas, and the region has never had a recorded wind speed that would require this level of protection. The double-roof will be a more significant differentiator in markets that are prone to hurricanes and tornadoes.
No Plans to Slow Down
“Rob Roy has set the bar for the industry’s performance and standards,” said Missy Young, Executive Vice President of Colocation at Switch. “With his direction, Switch SUPERNAP will continue to change the landscape of the world’s data center and technology solutions industry to keep businesses running with elite resiliency and innovating within the SUPERNAP ecosystems.”
"We’ve never had a client experience an outage, and never had to issue a service credit," said Young. "We have no plans to slow down at all. Our biggest challenge has been getting the buildings up fast enough to meet customer demand."
Indeed, just down the street from SuperNAP 8, construction teams are beginning work on SuperNAP 9, which will be Switch’s largest project yet at 600,000 square feet, and is expected to be completed in the first half of 2015.