Number of U.S. Government IT Facilities Rises to 7,000
July 25th, 2013 By: Jason Verge
The U.S. government wants its servers to come out of the closets. But first it has to figure out exactly how many server closets it has, and the number continues to be a moving target.
As it seeks to consolidate its IT infrastructure across dozens of agencies, the government has had trouble sorting out how many data centers it has, and even more trouble adding up all the server closets, which include rooms smaller than 100 square feet.
As a result, the number of IT facilities included in the Federal Data Center Consolidation Initiative (FDCCI) continues to grow. The number, which started at 432 in 1999, grew to 3,000 last year and has now exploded to nearly 7,000.
It should come as no surprise that the Office of Management and Budget (OMB) has to explain this to a House Oversight and Government Reform subcommittee this week. Part of the explanation is defining what to count. Several years into the consolidation process, the FDCCI was integrated with the new PortfolioStat, meaning that sub-500 square foot data centers (or server closets) would now be counted. More than 70 percent of Federal “data centers” are actually server closets, according to testimony from the General Services Administration’s David McClure.
Now Over 7,000 Data Centers
So how many “data centers” are there? The most recent number, given during a joint House-Senate briefing, was 7,000. This is up from a recent Government Accountability Office finding that 22 of the 24 agencies participating in the OMB-led FDCCI now house 6,800 data centers, more than double the last estimate of 3,133. The sprawl of these facilities makes a firm figure hard to pin down, as DCK noted in late 2012.
“Initially, OMB required agencies to report only data centers that were greater than 500 square feet in size and that met one of the tier data center classifications defined by the Uptime Institute,” said the General Services Administration’s David McClure in Congressional testimony. “Based on that definition, the first data center inventory, reported in October 2010, identified the data center asset baseline as 2,094 data centers.”
In a sense, the number has been growing along with the government’s ambitions. Server closets in office buildings allow agency staff to keep their IT assets nearby, but are typically less efficient than data centers. A key goal of the consolidation effort is to shift equipment from legacy facilities into data centers with energy efficient designs. With a big chunk of the government’s $82 billion in IT spending living in small inefficient rooms, the opportunity for savings is immense.
New Focus on Optimization
Federal agencies had closed 484 data centers as of May 2013, up from the 381 closures reported when DCK last checked in, in November 2012. A total of 855 closures are planned by the end of FY 2013, according to Federal CIO Steven VanRoekel.
The Government Accountability Office (GAO) says the consolidation effort must provide metrics beyond facility closures. “OMB had not tracked and reported on other key performance measures, such as progress against the initiative’s cost savings goal of $3 billion by the end of 2015,” according to David Powner, Director of Information Technology Issues at the GAO.
VanRoekel agrees that these key performance measures are imperative to the FDCCI’s success. “In the initial stages of the effort, it was necessary to focus on data center counts and physical closures,” said VanRoekel. “Today, we are looking at new incentives focused on a more outcome-based approach to improve the overall efficiency and effectiveness of data center operations and optimize total cost of ownership.”
The suggested next step is to classify data centers as core and non-core, according to the GSA’s McClure. The core data centers will serve as consolidation points, thanks to their economies of scale. Agencies are encouraged to optimize their data centers against total cost of ownership metrics, while striving to reach an overarching goal of closing 40 percent of facilities.
In conjunction with the FDCCI Task Force, the GSA developed a tool to help agencies identify and select their core data centers. It defines nine draft criteria that are key attributes for core data centers:
- Power usage effectiveness (PUE) must be lower than 3.0
- Data center must be metered for use of electricity
- Agency must have sufficient information to calculate a cost of operating system per hour (COSH) score
- Virtualization must be at least 40%; virtualization is defined as a technology that allows multiple software-based machines, with different operating systems, to run in isolation, side-by-side, on the same physical machine
- There must be a ratio of at least 10 servers per full-time equivalent (FTE)
- Power capacity must be at least 30 watts per square foot
- Facility utilization must be between 20% and 80% of the data center space
- Data center must meet at least the Tier One standards defined by the Uptime Institute
- Data center must be agency-owned, leased, or in the cloud
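Read together, the nine draft criteria amount to a pass/fail screen. A minimal sketch of how an agency might apply them to an inventoried facility (the field names and the dictionary layout here are hypothetical, not part of the GSA tool):

```python
# Hypothetical screen against the GSA's nine draft core-data-center criteria.
# Field names are illustrative; thresholds mirror the list above.

def is_core_candidate(dc: dict) -> bool:
    """Return True only if a facility meets all nine draft criteria."""
    checks = [
        dc["pue"] < 3.0,                           # PUE lower than 3.0
        dc["electricity_metered"],                 # metered for electricity use
        dc["cosh_data_available"],                 # enough data for a COSH score
        dc["virtualization_pct"] >= 40,            # at least 40% virtualized
        dc["servers"] / dc["ftes"] >= 10,          # >= 10 servers per FTE
        dc["watts_per_sqft"] >= 30,                # power capacity floor
        20 <= dc["utilization_pct"] <= 80,         # facility utilization band
        dc["uptime_tier"] >= 1,                    # at least Uptime Tier One
        dc["ownership"] in {"owned", "leased", "cloud"},
    ]
    return all(checks)

facility = {
    "pue": 1.8, "electricity_metered": True, "cosh_data_available": True,
    "virtualization_pct": 55, "servers": 120, "ftes": 8,
    "watts_per_sqft": 45, "utilization_pct": 60,
    "uptime_tier": 2, "ownership": "owned",
}
print(is_core_candidate(facility))  # → True
```

Because a single failed check (say, a PUE of 3.5) disqualifies a facility, the screen pushes agencies toward their best-run sites as consolidation points.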
The GSA has also developed a Total Cost of Ownership (TCO) model.
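The GSA’s actual TCO model is not detailed in the testimony, but the basic arithmetic of such a roll-up is straightforward. A sketch under assumed cost categories (all figures and category names here are illustrative, not the GSA’s model); note how PUE scales the energy line, since every watt of IT load costs PUE-times-as-many watts at the meter:

```python
# Illustrative total-cost-of-ownership roll-up. Cost categories and numbers
# are assumptions for the sketch, not the GSA's TCO model.

def annual_tco(hardware, it_kwh, rate_per_kwh, staff, facility, pue):
    """Sum annual ownership costs; energy is IT consumption scaled by PUE,
    because the facility draws PUE x the IT load from the utility."""
    energy = it_kwh * pue * rate_per_kwh
    return hardware + energy + staff + facility

total = annual_tco(hardware=500_000, it_kwh=1_000_000,
                   rate_per_kwh=0.10, staff=750_000,
                   facility=200_000, pue=1.8)
print(total)  # → 1630000.0
```

Lowering the assumed PUE from 1.8 to 1.4 in this sketch would cut the energy line from $180,000 to $140,000, which is the kind of efficiency gain the consolidation push is chasing.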
The bottom line is that the number keeps changing because the definition keeps changing and they keep finding more sprawl. The FDCCI, going forward, will focus more on key performance measures, total cost of ownership, and identifying core data centers to spearhead consolidation efforts.
Sure, you can shut down server closets and move things into remaining data centers, but this is only a *partial win*.
As I see it, the real problem is that most data center consolidation efforts start too low in the IT service supply chain. As a result, there is:
* A lot of waste (energy and raw utilization) in over-provisioning;
* Risk associated with aging and less rigorously managed equipment;
* Higher financial impact due to:
1. How the assets are managed through their lifecycle;
2. How purchasing/sourcing/acquisition may be impaired in negotiating better bulk rates for assets/infrastructure;
3. How difficult it becomes to capture and associate service assets with the supported services to determine what it really costs to deliver a service.
And I could make that list longer, but I think my point is made.
To be successful, we need to start looking at this challenge from a new perspective that provides an adequate context — one that is closer to the customer, differentiates between the customer and service provider perspectives, and enables effective decision making about service and infrastructure rationalization.
Starting at the equipment and working up is the hard, time-consuming and costly way of doing it. There are better ways, and they aren’t that expensive, except they require that people think differently.
Ask for help and ideas — you might be surprised what you get back!
Bob Deutsche, posted July 26th, 2013
When I read this, I was reminded of a discussion I had with the CIO of a state in the Pacific Northwest.
The state had just gone through a data center consolidation…that being a good thing. He commented, though, that in reality all he had really done was take what had been 12 separate facilities and stuff them into one. The other downside was that the new data center was literally sitting at the end of a runway (honestly). It kind of gave new meaning to disaster preparedness perhaps…we smiled about that too.
Bottom line is that data center consolidation has physical aspects as well as logical aspects. Physical consolidation is generally a piece of cake; logical consolidation, not so much. Logical consolidation hurts even more than physical consolidation in terms of time, cost, jobs and effort. I do not see the Federal government ever doing this, I’m afraid…simply too political.