Cisco Opens Doors on New Texas Data Center
April 18th, 2011 By: Rich Miller
Cisco Systems has officially opened the doors at its new data center in Allen, Texas, which showcases a number of energy efficiency features and is outfitted with Cisco’s latest technologies for building unified infrastructures for cloud computing applications.
The new data center is about 15 miles north of another Cisco facility in Richardson, Texas. The two data centers will serve as “active-active” mirrors of one another, providing instant failover in the case of an outage at either location. When data is updated at one of the data centers, the changes are instantly synchronized at the other facility.
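The "active-active" arrangement described above boils down to synchronous mirroring: a write counts as complete only once both sites have committed it, so either site can take over with no data loss. A minimal sketch of that idea (the class and site names are illustrative, not Cisco's implementation):

```python
# Sketch of synchronous "active-active" mirroring: acknowledge a
# write only after BOTH sites commit it, so failover loses nothing.
# Illustrative only -- not Cisco's actual replication machinery.

class Site:
    def __init__(self, name: str):
        self.name = name
        self.store: dict = {}

    def commit(self, key: str, value: str) -> bool:
        self.store[key] = value
        return True


class ActiveActivePair:
    def __init__(self, a: Site, b: Site):
        self.sites = (a, b)

    def write(self, key: str, value: str) -> bool:
        # Synchronous replication: succeed only if every site commits.
        return all(site.commit(key, value) for site in self.sites)


allen = Site("Allen")
richardson = Site("Richardson")
pair = ActiveActivePair(allen, richardson)
pair.write("order:42", "shipped")
# Both mirrors now hold identical data, so either can serve reads
# immediately if the other goes dark.
assert allen.store == richardson.store
```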
Three Sets of Paired Facilities
Cisco calls this approach a Metro Virtual Data Center (MVDC). The company plans to consolidate its data centers into three pairs of MVDC production data centers worldwide, which will serve as the building blocks for Cisco IT Elastic Infrastructure Services, the company’s in-house service cloud.
The new data center will use Cisco’s Unified Computing System, which will support the consolidation effort and power Cisco’s in-house cloud. The facility features a holistic data center fabric including Nexus 7000 and 5000 Series switches, Nexus 1000V Virtual Switches, MDS storage networking switches, Data Center Network Manager and NX-OS, a comprehensive data center operating system that spans the Cisco data center portfolio. Cisco says the unified fabric allowed it to save more than $1 million on cabling in the Allen facility.
Here are some other notable features of the new Cisco data center:
- The building was designed to withstand tornado winds up to 175 mph.
- The uninterruptible power supply (UPS) room in the 5-megawatt data center uses rotary flywheels, which require little energy to stay in motion and can start the diesel generators in the event of a power loss.
- The data center is cooled by an air-side economizer design, which reduces the need for mechanical chilling by using ambient fresh air when the outside temperature is low enough. Cisco calculates the facility can use filtered, unchilled outside air 65 percent of the time, saving the company an expected $600,000 per year in cooling costs while contributing to its corporate green goals.
- Cisco also opted to forgo a raised-floor environment in favor of overhead cooling and cable management. The overhead cooling ducts drop air into each cold aisle, where it enters the servers and is then vented through a passive chimney system at the rear of each enclosure into an overhead return plenum. That’s a change from the design in Richardson, which uses a 36-inch raised floor.
- A lagoon captures rainwater to irrigate the indigenous, drought-resistant landscape plants.
- Solar cells on the roof generate 100 kilowatts of power for the office spaces in the building.
- Cisco has submitted the data center for Gold certification under Leadership in Energy and Environmental Design (LEED). Developed by the U.S. Green Building Council, LEED provides builders with a framework for measurable green building design, construction, operations, and maintenance.
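The economizer item in the list above amounts to a simple control decision: use filtered outside air whenever it is cool enough, and fall back to mechanical chilling otherwise. A rough sketch of that rule (the setpoint below is illustrative, not Cisco's actual threshold):

```python
# Rough sketch of an air-side economizer decision: economize when
# ambient air is at or below the supply-air setpoint, otherwise use
# mechanical chilling. The 72 F setpoint is an assumption for
# illustration, not a published Cisco figure.

def use_outside_air(outside_temp_f: float, supply_setpoint_f: float = 72.0) -> bool:
    """Return True when free cooling with outside air is viable."""
    return outside_temp_f <= supply_setpoint_f


# A mild Texas morning qualifies; a summer afternoon does not.
assert use_outside_air(65.0) is True
assert use_outside_air(98.0) is False
```

Real economizer controls also weigh humidity and air quality, which is why the facility still needs chillers for the remaining 35 percent of hours.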
Cisco has designed the Allen data center to achieve a Power Usage Effectiveness (PUE) metric of 1.35.
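For context, PUE is the ratio of total facility power to the power delivered to IT equipment, so a PUE of 1.35 means every watt of server load carries about 0.35 watts of cooling and electrical overhead. A quick sketch of the arithmetic (the even 5,000 kW draw below is illustrative, borrowed from the facility's stated 5-megawatt capacity):

```python
# PUE = total facility power / IT equipment power. Lower is better;
# 1.0 is the theoretical ideal (zero overhead).

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness of a facility."""
    return total_facility_kw / it_equipment_kw


# Illustrative numbers only: at the 1.35 target, a facility drawing
# a full 5,000 kW would deliver roughly 3,704 kW to the IT load.
it_load_kw = 5000 / 1.35
print(round(it_load_kw))          # 3704
print(round(pue(5000, it_load_kw), 2))  # 1.35
```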
“Any Application in Any Location”
“Our new data center showcases Cisco’s innovation leadership and the data center architectural flexibility to deliver any application, in any location, and any scale in a secure and open manner,” said John McCool, senior vice president, Data Center, Switching, and Services Group, Cisco.
“As critical business assets, data centers today are undergoing rapid technology and architectural changes to meet and respond more rapidly to evolving business goals,” said Soni Jiandani, vice president, Server Access and Virtualization Group, Cisco. “Innovative Cisco technologies like the Cisco Unified Computing System and Nexus product families are helping data centers transform into an agile and efficient networked environment that helps deliver information from any device to any content, anywhere, at any time.”
Bob L - Posted April 18th, 2011
Are those cabinets ducted? Or are they just dumping the heat high in the room? What brand of cabinet are those?
What brand of gen sets/flywheel system did they use?
It looks like they’re using back-of-rack heat containment and possibly Rittal cabinets. Aside from the flywheel UPS, this looks pretty similar to what we’ve been doing for years.
Justin T - Posted April 18th, 2011
That’s E1 Dynamics on the gen/flywheel system.
Joe - Posted April 18th, 2011
Impressive design geared towards energy efficiency. This is the model data center of the future, way to go Cisco!
Richard Werner - Posted April 22nd, 2011
Great design and implementation. As per the previous post, I’m surprised that there isn’t a return plenum ceiling that the chimneys connect to. A return plenum would offer multiple modes of outside-air economization. Nevertheless, nice work!
The entire ceiling is a return air plenum. The cold air ducts drop just below the back-of-rack chimneys in the cold aisles, so you get natural stratification. The system can then run in a closed- or open-loop mode depending on outside air conditions.
I don’t see any cold aisle containment in the picture above. The cold air seems to be released into the open data center area. I think if they’d closed it in, they would have achieved a lower PUE.
Victor - Posted May 10th, 2012
The ducted cabinets technology was developed by Chatsworth Products, Inc.
Can you send details about the cooling system for the rotary UPS? I’m from Brazil and I’m thinking of using this system, but we have many doubts about the cooling system. Thanks. Rogerio.