An example of the unusual fiber density at the SuperNAP in Las Vegas.

Custom Infrastructure Powers the SuperNAP


Switch Communications CEO Rob Roy with one of the WDMD custom cooling units at the Las Vegas SuperNAP during a tour last year.

LAS VEGAS – High on a narrow catwalk alongside the massive cooling units at the SuperNAP, a security guard stops to open one of the four doors lining the side of the unit. As the door opens, a powerful blast of air streams out. “You have to remember to hold on to keep from getting blown off,” said Melissa Young, the Executive VP of Sales Engineering at the SuperNAP, a 407,000 square foot data center facility built by Switch Communications.

The cooling unit is a WDMD – short for Wattage Density Modular Design – a custom-built unit housed outside the data center that can automatically switch between four different cooling options to deliver the most efficient cooling for current conditions. Young says the WDMDs are “built by Switch, for Switch” and not available from any vendor.
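The WDMD’s actual modes and switching logic are proprietary and not described here, but the idea of automatically picking the most efficient cooling option for current conditions can be sketched as a simple selector. The mode names and thresholds below are hypothetical illustrations, not Switch’s design:

```python
# Hypothetical sketch of multi-mode cooling selection. The real WDMD's
# four modes and its switching criteria are "built by Switch, for Switch"
# and not public; these thresholds are invented for illustration only.

def select_mode(outdoor_temp_f: float, humidity_pct: float) -> str:
    """Pick a cooling mode based on current outdoor conditions."""
    if outdoor_temp_f < 60:
        return "direct free cooling"     # cool outside air alone suffices
    if humidity_pct < 25:
        return "evaporative cooling"     # dry desert air: low wet-bulb temperature
    if outdoor_temp_f < 85:
        return "indirect economization"  # air-to-air heat exchange, no outside air indoors
    return "mechanical cooling"          # compressor-based fallback

# A hot, dry Las Vegas afternoon would favor evaporative cooling:
print(select_mode(105, 10))
```

The appeal of a multi-mode unit is that a desert climate spends much of the year in the cheaper regimes, reserving compressor-based cooling for the worst conditions.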

The units are part of the customized power and cooling infrastructure at the SuperNAP, where Switch also builds its own power distribution units (PDUs) and remote power panels. Young says the SuperNAP’s generators are also customized to Switch’s specifications by Detroit Diesel.

At a time when many large data center builders are focused on the industrialization of data center construction using standardization and bulk purchasing from vendors, Switch is charting a different path, building custom equipment to fit its vision for high-density data centers supporting power loads of 1,500 watts a square foot and beyond. It’s a philosophy also seen at Google, which builds its own servers, containers and networking gear.

Growth Beyond Las Vegas?
Can this model scale beyond Switch’s hugely successful data center operation in Las Vegas? Young says Switch is scouting prospective data center sites in other markets, but has yet to decide whether to pursue projects outside Las Vegas.

“We are talking to a number of people about either having us build data centers for them or licensing our technology,” said Young.  

Much of the company’s expertise in extreme-density infrastructure could work in other markets and facilities. But the secret sauce supporting the Switch SuperNAP goes beyond custom infrastructure. The desert climate and unusually rich connectivity and bandwidth economics have also been huge factors in Switch’s success, and are less portable to other venues.

Fitting Out the SuperNAP 
But first, there’s Vegas and the SuperNAP. With the first 45,000 square foot pod nearing capacity, Switch has completed the fit-out of a second pod and says it expects a significant portion of the new space to be filled by large requirements from existing tenants. “We didn’t figure we’d need Sector 2 until early next year,” Young says.

The building includes six of these pods. “Once this building is full of gear, it will be the most densely-packed data center in the world,” she said. As the SuperNAP nears capacity,  Switch plans to build two more facilities on adjacent property.

More Power, Fewer Racks
Young says the SuperNAP’s ability to pack larger workloads into fewer racks offers a compelling value for customers with large, “Internet-scale” workloads. “The power and cooling configuration really does change the equation for our customers,” said Young. “The density allows you fewer racks, and companies usually do a (hardware) refresh to take advantage of that.”

Young said many customers find the high-density installations also save on cabling, since there are fewer racks and no need for additional spacing between racks to avoid hot spots.

She said most SuperNAP customers are running equipment at between 8 kilowatts and 17 kilowatts a rack, with one customer at 24 kilowatts. With 100 megawatts of power for the facility, Young said Switch expects to ultimately be able to support 7,000 racks, although that number could edge lower if rack power densities rise. “100 megawatts is a lot of power, but it’s a finite number,” she said. “It’s not a tether to the sun.”
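As a back-of-the-envelope check of the figures above, the relationship between the 100-megawatt facility budget and the supportable rack count can be sketched. This simple division is an illustration, not Switch’s actual capacity-planning method, and it ignores cooling and distribution overhead:

```python
# Back-of-the-envelope rack capacity from a fixed facility power budget.
# Figures from the article: 100 MW facility, racks drawing 8-24 kW each.
# Illustrative only: real planning would account for cooling and
# distribution overhead, redundancy, and stranded capacity.

def racks_supported(facility_mw: float, avg_rack_kw: float) -> int:
    """Racks a facility can power at a given average draw per rack."""
    return int(facility_mw * 1000 / avg_rack_kw)

# At an average draw of about 14.3 kW per rack, 100 MW lands near the
# ~7,000-rack figure cited in the article:
print(racks_supported(100, 14.3))
# If average densities climbed toward 17 kW per rack, the count shrinks:
print(racks_supported(100, 17))
```

This is why Young notes the rack count could edge lower as densities rise: the power budget is fixed, so every extra kilowatt per rack comes out of the total rack count.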

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.
