Cisco Unveils Multi-Purpose Data Center

The new Cisco RTP1 data center in Raleigh, North Carolina.

Many companies operate dedicated data center space for application testing and development, as well as disaster recovery facilities. But what if both functions could take place in the same space?

In an age when many data centers are optimized for specific types of compute loads, the new Cisco data center in Research Triangle Park serves as a proof-of-concept for another approach to data center design: a facility that can quickly shift from one mission to another.

From App Development to DR

Cisco’s new RTP1 data center will house 18,500 square feet of technical space where Cisco engineers will develop and test applications for both its customers’ products and the company’s internal IT operations. But in the event of a disaster in Dallas, where the company’s two production data centers are located, the Research Triangle facility can quickly be repurposed to provide backup and disaster recovery services.

“We think it’s really unique in the industry,” said James Cribari, a product manager for IT services at Cisco. “This is really the launch point for discussions about how to design and build data centers. If you have a data center that is part production and uses virtualization, this might definitely be of interest. We believe this multi-purpose use will be of great interest to our customers. There are multiple use cases.”

The approach also offers a showcase for Cisco’s data center strategy and technology. The company’s “Data Center 3.0” strategy focused on the power of virtualization to make data centers more agile, and the importance of orchestration to manage virtualized assets. The company’s Unified Computing System (UCS) and Nexus switches followed, providing the hardware capabilities to rapidly shift virtual assets between servers and data centers.

Packed with UCS Gear

The Raleigh data center came online in April with 2.8 megawatts of critical power capacity and room for 438 racks of equipment. A typical rack contains 5 chassis of UCS equipment, with 40 blades in a rack. Cribari says each rack can hold 800 to 1,000 virtual machines, and run with an average power density of 10 to 12 kilowatts per rack.
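Working through those figures as a quick sanity check: five chassis holding 40 blades implies eight blades per chassis, and 800 to 1,000 virtual machines per rack works out to 20 to 25 VMs per blade. A minimal sketch of that arithmetic (the per-chassis and per-blade breakdowns are derived here, not stated in the article):

```python
# Rack-density figures from the article; per-chassis and per-blade
# numbers are derived for illustration.
chassis_per_rack = 5
blades_per_rack = 40
vms_per_rack = (800, 1000)  # stated range

blades_per_chassis = blades_per_rack // chassis_per_rack
vms_per_blade = tuple(v // blades_per_rack for v in vms_per_rack)

print(f"Blades per chassis: {blades_per_chassis}")                 # 8
print(f"VMs per blade: {vms_per_blade[0]} to {vms_per_blade[1]}")  # 20 to 25
```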

The design is closely modeled on Cisco’s new data center in Allen, Texas, which uses a slab instead of a raised floor environment, with overhead cooling and cable management. The overhead cooling ducts drop air into each cold aisle, where it enters the servers and then is vented through a passive chimney system in the rear of each enclosure and into an overhead return plenum.

One difference between the two designs: the RTP1 data center uses water-side economizers in its cooling system, while air-side economizers are used in Allen. The reason for the shift wasn’t geography so much as the space available for equipment, Cribari said.

The Raleigh facility is a Tier II data center, and is expected to operate with a Power Usage Effectiveness (PUE) of 1.4.
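PUE is the ratio of total facility power to IT equipment power, so the stated targets imply a total facility draw that can be backed out directly. A small illustrative calculation, assuming the 2.8 megawatts of critical capacity is the IT load in the PUE ratio:

```python
# PUE = total facility power / IT equipment power, so total facility
# power can be estimated from the IT load and the PUE target.
def total_facility_power_mw(it_power_mw: float, pue: float) -> float:
    """Back out total facility power from IT load and PUE."""
    return it_power_mw * pue

# RTP1: 2.8 MW critical (IT) capacity at an expected PUE of 1.4.
total = total_facility_power_mw(2.8, 1.4)
print(f"Estimated total facility power: {total:.2f} MW")  # 3.92 MW
```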

Dual Purpose Design

“Two specific types of products are being developed in Raleigh,” said John Manville, Vice President of IT at Cisco. “The engineering team is working on products for customers. The traditional IT team will be developing applications to be deployed at the Allen data center to support Cisco IT.”

Manville said the Raleigh site can be quickly shifted from those duties to provide disaster recovery for the Dallas-area facilities. The swift changeover is accomplished using Service Profiles in UCS, which allows administrators to quickly apply pre-designed templates to virtual servers. Admins can develop multiple service profiles, and shift between them as needed.
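The idea behind the changeover can be sketched in a few lines: pre-built templates are stamped onto a pool of servers to give the same hardware a new mission. The profile names, fields, and function below are purely hypothetical illustrations of the concept, not the actual UCS API:

```python
# Hypothetical sketch of the service-profile concept: pre-designed
# templates that re-purpose one hardware pool for different missions.
# All names and fields here are illustrative, not real UCS objects.
PROFILES = {
    "dev-test": {"vlan": 100, "boot_target": "dev-san", "role": "application testing"},
    "disaster-recovery": {"vlan": 200, "boot_target": "dr-san", "role": "production backup"},
}

def apply_profile(servers: list[str], name: str) -> dict:
    """Stamp every server in the pool with the chosen template."""
    template = PROFILES[name]
    return {server: dict(template) for server in servers}

# Shift the whole pool from dev/test duty to disaster recovery.
pool = ["blade-01", "blade-02", "blade-03"]
assigned = apply_profile(pool, "disaster-recovery")
print(assigned["blade-01"]["role"])  # production backup
```

The point of the pattern is that the servers themselves hold no fixed identity; swapping the template swaps the mission.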

The Raleigh RTP1 data center has earned Gold-level certification under the LEED (Leadership in Energy and Environmental Design) program for energy efficient buildings.

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.
