A view of the overhead cooling ducts at the new Cisco DC2 data center north of Dallas, which drop air into the cold aisle. Cisco estimates that the facility will be able to make use of air economizers about 50 percent of the year.
In mid-2009, local media in Dallas reported that Cisco Systems (CSCO) had selected a site north of Dallas for a $180 million data center project. The company hasn’t had much to say about the new facility – until now. Yesterday Cisco launched Data Center 2011 Texas, an interactive tour of the new data center, which features the latest refinements to Cisco’s design.
The new DC2 data center is about 15 miles north of the Cisco DC1 data center in Richardson, Texas. The two facilities will serve as “active-active” mirrors of one another, providing instant failover in the case of an outage at either location. When data is updated at one of the data centers, the changes are instantly synchronized at the other facility. Cisco has leased 10,000 square feet of colocation space in the area to test its active-active operations and syncing as it continues building the DC2 facility, which is expected to come online next year.
The new data center will use Cisco’s Unified Computing System, which will support the consolidation of other data centers into the Dallas facilities, and also power Cisco’s in-house cloud computing system.
The DC2 facility will be the first Cisco data center to use fresh air to cool its servers via air economizers. The facility will be able to use economizers when the outside temperature is below 60 degrees, and Cisco estimates that it will be able to operate without chillers for about 50 percent of the year – primarily at night, given temperatures in Texas. The company has also raised its server inlet temperature from 65 degrees to 78 degrees, allowing substantial energy savings.
Using Overhead Cable Management – for Now
Cisco also opted to forego a raised floor environment and use overhead cooling and cable management. Interestingly, Cisco says that the company has since reassessed the merits of using overhead cable distribution, and will likely revise future designs to include a more limited raised-floor (perhaps 12 inches instead of 36 inches) to accommodate some cabling.
The overhead cooling ducts drop air into each cold aisle, where it enters the servers and then is vented through a passive chimney system in the rear of each enclosure and into an overhead return plenum. That’s a change from the design in Richardson, which uses a 36-inch raised floor.
Solar Power Will Support Office Space
The DC2 facility is supported by two 10-megawatt power feeds, and will add two more 10-megawatt feeds for phase 2, providing redundancy and total capacity of more than 20 megawatts. Cisco expects racks in the data center to have an average load of about 8 kilowatts per rack. The new facility features a rooftop array of solar panels that will generate up to 100 kilowatts of solar energy – enough to power the office areas of the data center.
For more details, check out the Data Center 2011 Texas feature on the Cisco web site. You can compare it with a similar interactive feature about the design and construction of the company’s Richardson data center.
A graphic of what the Cisco Texas DC2 data center will look like upon completion.
Article printed from Data Center Knowledge: http://www.datacenterknowledge.com
URL to article: http://www.datacenterknowledge.com/archives/2010/10/19/inside-ciscos-new-texas-data-center/
URLs in this post:
 $180 million data center project: http://www.datacenterknowledge.com/archives/2009/06/10/cisco-plans-184m-dallas-data-center/
 Data Center 2011 Texas: http://www.cisco.com/go/dc2011
 Unified Computing System: http://www.datacenterknowledge.com/archives/2009/03/16/cisco-unified-computing-is-an-inflection-point/
 Richardson data center: http://www.datacenterknowledge.com/archives/2009/10/27/interactive-tour-ciscos-flagship-data-center/
 Rich Miller: http://www.datacenterknowledge.com/archives/author/richm/