LAS VEGAS – Once you’ve built the mighty SuperNAP, what do you do for an encore? If you’re data center provider Switch, you build a better SuperNAP right next door.
The debut of the SuperNAP data center in 2009 put Switch on the map in a big way. At more than 400,000 square feet, the SuperNAP offered unprecedented scale and the ability to support extreme power density. The facility hosts servers and storage for many of the world’s leading technology companies, including more than 40 cloud computing companies and a dense concentration of network carriers.
The company’s newest creation, known as SuperNAP 8, builds on that foundation with a number of innovations in cooling and reliability. The building has just become the first multi-tenant data center to earn Tier IV Constructed Facility certification, the highest rating possible under The Uptime Institute’s ratings for mission-critical reliability.
For Switch founder and CEO Rob Roy, SuperNAP 8 is the culmination of a decade-long effort to rethink the data center. Its design can operate effectively in any climate, providing an ultra-efficient template for global growth. Switch is finalizing plans for an international expansion, with details to be announced later this year.
“We’ve really been focused on creating the world’s best data center,” said Roy, who has patented many of the design innovations at Switch. “SuperNAP 8 is the end game of that effort. I’ve wanted to see if we could create one global standard for our data centers.”
First Tier IV Colocation Facility
The effort has made an impression on The Uptime Institute, which has evaluated data centers around the world for its Tier certification program. Only four data centers in the U.S. have ever earned Tier IV Constructed Facility certification, the highest level, and until now all have been single-tenant financial services data centers.
“The first Tier IV Facility Certification in the colocation sector speaks for itself: another world-class accomplishment,” said Ed Rafter, Vice President of Technology for The Uptime Institute. “Switch SuperNAP 8 has incorporated a number of well-planned and innovative solutions for their facilities infrastructure requirements.”
SuperNAP 8 is the next step in Roy’s vision for a massive technology ecosystem in Las Vegas. Switch now has more than 1,000 customers and 315 employees, and its projects keep 1,000 construction workers employed. The 300,000 square foot SuperNAP 8 facility is built several hundred yards from the original SuperNAP (now known as SuperNAP 7).
SuperNAP 8 was built using pre-fabricated modular components manufactured by Switch. The major building block is known as a MacroMOD, and includes two data halls. Switch is installing customers in the first two data halls, which represent half of the building’s total capacity.
So what’s different about SuperNAP 8? Data Center Knowledge recently toured the new facility, which features the same combination of density and efficiency seen at SuperNAP 7, where the full-year Power Usage Effectiveness (PUE) is 1.18. That puts its efficiency nearly on par with Google, which reports a full-year PUE of 1.12 across its fleet of data centers.
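For reference, PUE is simply the ratio of a facility’s total power draw to the power consumed by the IT equipment itself; 1.0 is the theoretical ideal, with everything above it representing cooling and electrical overhead. A minimal sketch of the calculation (the kilowatt figures below are hypothetical, chosen only to illustrate a 1.18 ratio):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A PUE of 1.18 means roughly 18% of facility power goes to overhead
    (cooling, power distribution losses) rather than to the IT gear.
    """
    return total_facility_kw / it_equipment_kw


# Hypothetical load numbers for illustration only, not Switch's actual figures.
print(round(pue(total_facility_kw=1180.0, it_equipment_kw=1000.0), 2))
```

By this measure, a facility drawing 1,180 kW in total to power 1,000 kW of servers would report a PUE of 1.18.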
This level of efficiency is unusual for a multi-tenant facility, which has less flexibility in pushing the boundaries of server inlet temperature. Switch operates the SuperNAPs’ server halls at 69 degrees and 40 percent humidity, while hyperscale players like Google and Facebook can push temperatures closer to 80 degrees.
A high-level change in the new design is how the data center is organized. At SuperNAP 7, a massive power spine runs down the center of the building, with data halls and power rooms on each side. At SuperNAP 8, all the power rooms are together along the perimeter of one side of the building, with the power spine alongside.
The data halls are now together in the remainder of the interior space, with the exterior cooling units lining the far side of the building. This diagram provides a cross-section of the facility, showing the placement of (from left to right) the generators, power rooms, power spine, data halls, and cooling units.
Separating the power equipment from the servers and the cooling units provides additional reliability, limiting the potential for problems should the electrical gear fail.