A Look Inside the Vegas SuperNAP
Eight hundred racks is a lot of servers. For most data centers, orders for 800 racks before a facility even opens would create a capacity problem. But not for the SuperNAP, a 407,000 square-foot data center in Las Vegas built by Switch Communications Inc.
When the first phase of the SuperNAP opens on Sept. 1, it will be one of the world’s most unusual data centers, with the ability to cool racks exceeding 20kW of power load. When the facility is completed, it will cost more than $300 million and be able to host 7,000 customer servers.
The SuperNAP will have no raised floor, no computer room air conditioning units (CRACs) inside the data center, and no use of liquid cooling – in fact, virtually no water in the entire building. The massive facility is the ultimate expression of an alternate view of high-density data center design, formulated by Switch Communications CEO and co-founder Rob Roy.
“My feeling is that when people see this, they’ll say that this is the answer going forward,” said Roy. “With our new design, we may be able to get to 2,000 watts per square foot. We’re very excited about what we’re doing.”
Data Center Knowledge recently got an inside look at Switch Communications’ Las Vegas operation, including the ultra-high density hosting area of its existing SwitchNAP facilities, where several prominent Internet companies are running banks of racks at 1,500 watts a square foot using Switch’s high-density T-SCIF heat management system (short for Thermal Separate Compartment in Facility). See this video for a look inside a T-SCIF for Sun Microsystems, which hosts its Network.com utility computing platform at Switch.
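For context, the density figures quoted here (20kW racks, 1,500 watts per square foot) can be related with a bit of arithmetic. A minimal sketch, assuming a hypothetical allocation of 25 square feet of floor space per rack (rack footprint plus aisle share) — that allocation is an illustrative assumption, not a figure published by Switch:

```python
# Rough power-density arithmetic relating per-rack load (kW)
# to facility density (watts per square foot).
# The 25 sq ft per rack allocation (rack plus aisle share) is an
# assumed illustrative figure, not one published by Switch.

def watts_per_sqft(rack_kw: float, sqft_per_rack: float = 25.0) -> float:
    """Convert a per-rack load in kW to watts per square foot of floor."""
    return rack_kw * 1000 / sqft_per_rack

# A 20 kW rack spread over 25 sq ft works out to 800 W/sq ft.
print(watts_per_sqft(20))    # 800.0

# Conversely, sustaining 1,500 W/sq ft at that allocation would
# imply roughly 37.5 kW per rack.
print(1500 * 25 / 1000)      # 37.5
```

The point of the sketch is simply that density numbers depend heavily on how much aisle and support space is attributed to each rack, which is why per-rack kW and per-square-foot watts can both be quoted for the same facility.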
We also had a tour of the SuperNAP facility, which was in the late stages of construction, and got a look at the custom central cooling units that Roy says will take air cooling to unprecedented levels of efficiency and flexibility. The units, known as WDMDs (for Wattage, Density, Modular Design) have four coils to allow different approaches to cooling in different conditions. See videos of our walk-through at the SuperNAP and a closer look at the WDMD cooling units.
Until very recently, Switch has flown under the radar in the data center industry. After starting with a small facility tucked between stores in a south Las Vegas strip mall, Switch has quietly built and filled six data centers in Las Vegas with a list of marquee customers.
The SuperNAP marks the coming-out party for Switch, which will now use the enormous footprint to offer its high-density solutions to a broader world of clients seeking to solve difficult power and cooling problems.
The SuperNAP takes Switch’s data center design concepts to the next level. Roy believes the facility will set a new standard for data center management, and expects that its innovations will ultimately be widely adopted by competitors.
“We expect that our competitors will try to replicate our design,” says Roy. “But they can’t build one like it for another two years. So we have two to three years, and by then we will have 1 million square feet. We have the ability to build three more of these on this site, each at 400,000 square feet.”
Does the SuperNAP represent the future of high-density computing? Or is it a unique opportunity enabled by Switch’s unusual bandwidth access and the Las Vegas climate?
Switch may eventually explore opportunities in other markets, but not before it builds out its full footprint in Las Vegas, where it has advantages in climate and fiber access that aren’t easily duplicated. Roy says bandwidth pricing and telecom relationships are key differentiators for Switch, and that volume pricing has allowed the company to attain significant savings for customers.
In 2002 Switch acquired a former Enron Broadband Services facility in Las Vegas, which had exceptional connectivity due to Enron’s efforts to build a commodity bandwidth exchange. Switch says it now has more than 20 backbones running through its bandwidth hub.
“Connectivity wise, there’s not another building in America that comes anywhere near what we can do,” said Roy, who said Switch’s access to fiber backbones is “like pulling up to the Alaska pipeline to get your gasoline.”
“We have direct relationships with (connectivity providers) worldwide,” said Roy. “We’ve really created eight years of amazing relationships with these guys. Those tools are important in the data center and it doesn’t come up that much in our industry. When you go to industry events, all everyone talks about is the infrastructure in your data centers.”
nottlv, posted August 13th, 2008:
“…Switch says it now has more than 20 backbones running through its bandwidth hub. ‘Connectivity wise, there’s not another building in America that comes anywhere near what we can do,’ said Roy…”
They also make the same claim on their site:
“No facilities in the United States have more in building on-net national backbone connections from separate tier-1 providers as the Nevada NAPs in Las Vegas.”
I have a hard time believing this. The list of “20 backbones” includes several nobodies and is missing 2 of the 7 tier 1 carriers (GlobalCrossing and NTT/Verio). And to my knowledge, there are several large Tier 2 carriers that don’t have any presence in Las Vegas at all (Internap, Peer1, Mzima, etc.). When you look at someplace like Equinix Ashburn or Equinix Chicago, where they have dozens and dozens of carriers on-net, these claims are hard to believe.