
Retrofitting, Refurbishment, and ROI for Legacy Data Centers

As the demand for capacity rises, many data center operators are faced with difficult decisions when it comes to overhauling their legacy infrastructure.


The constantly evolving data center environment, characterized by increasing rack densities, is pushing hundreds of thousands of square feet of unutilized white space into the neglected realms of "outdated infrastructure".

As the demand for capacity rises, many data center operators are faced with difficult decisions when it comes to their legacy infrastructure. Do they abandon aging facilities in favor of building new infrastructure from the ground up, or commit to costly retrofitting projects that bear their own share of risk?

In a world where the viability of a data center relies on numerous factors, from rack density to local power grids, deciding what to do with America’s growing desert of legacy white space is far from simple.

White Space: America’s Barren Desert

Data center computational power per square foot is expected to quintuple between 2020 and 2025, according to Gartner analyst Henrique Cecci. Across the data center industry, rack densities are rising – and with them, the demands placed on sites' power and cooling.

Between 2017 and 2020, average rack densities in data centers rose from 5.6 kW to 8.4 kW. At the upper end, racks supporting high-performance computing (HPC) applications like generative AI now reach 200 kW – when, less than a decade ago, anything above 15 kW was considered "extreme density."


Huge spikes at the upper end, coupled with an inexorable (although slower) growth of both the mean and modal densities, are creating very real challenges for operators of older facilities and pushing them further into obsolescence.

The primary hurdles confronting traditional data center infrastructure are air circulation and power management. When too little cold air is supplied to the data hall, or when it fails to reach its intended locations or is unevenly distributed, as often occurs in older facilities, the data center cannot consistently accommodate high-density server racks.

At the same time, many sites face hard power constraints. Legacy power infrastructure is often built around a single plug type, a single cord type, and a single voltage. This leaves a great deal of legacy white space unable to handle anything outside a very narrow electrical profile at the rack level.

As a consequence, there are vast expanses of traditional data center white space, amounting to millions of square feet, that have limited utility for contemporary IT infrastructure.


A Ghost Town Industry

We're already seeing the impact of these data center challenges. At the current pace of demand and growth, there isn't time to renovate, refresh, or retrofit legacy spaces, so the industry is building new facilities with modern IT loads in mind rather than upgrading ones that already exist.

This situation cannot and will not persist indefinitely. Increasingly stringent environmental regulation, NIMBY-ism surrounding data center projects from residents and local governments, rising costs of materials, and supply chain disruptions in the construction industry all cast a shadow over the future of the "demolish (or abandon) and build" approach.

Much like how ghost towns emerged in areas that were once bustling with resource extraction activities, white space infrastructure is similarly evolving into an industry characterized by abandoned or underutilized facilities, resembling ghost towns.

With constantly evolving demands for power, cooling, and connectivity, data center operators – especially Tier II and Tier III operators looking to compete for business hosting workloads more complex than traditional enterprise computing – are increasingly finding themselves in possession of an unwanted "town," as customers who have outgrown the facility scurry away to new ones.

It’s not uncommon for a Tier II operator to miss out on the opportunity to host hyperscale workloads because of inadequate physical infrastructure. This is especially unfortunate at a time when hyperscale demand is overflowing into the colocation sector, especially into Tier II and Tier III markets.

The ROI on Retrofitting

Whether or not to retrofit comes down to understanding your potential return on investment. Operators need to weigh the cost of shutting down, gutting their data center, and rebuilding for the modern IT stack against the returns the upgraded facility can realistically generate.

These financial costs, and the time required, need to be weighed against whether it's simpler and cheaper to build a new environment from scratch. There is also an opportunity cost: legacy infrastructure can still be leased to customers running less complex IT stacks, as opposed to generating no income at all during a protracted renovation.
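As a rough illustration of that trade-off, here is a minimal sketch comparing discounted cash flows for the three options above: keep leasing the legacy space as-is, retrofit, or build new. All figures, timelines, and the discount rate are made-up assumptions for illustration, not industry benchmarks.

```python
# Hypothetical retrofit-vs-rebuild-vs-lease-as-is comparison.
# All dollar figures, downtimes, and the discount rate are
# illustrative assumptions, not real market data.

def net_value(annual_revenue, upfront_cost, downtime_years,
              horizon_years, discount_rate=0.08):
    """Discounted net value over the horizon: no revenue while the
    site is offline, discounted revenue afterwards, minus upfront cost."""
    value = -upfront_cost
    for year in range(horizon_years):
        if year < downtime_years:
            continue  # site offline during renovation/construction
        value += annual_revenue / (1 + discount_rate) ** (year + 1)
    return value

options = {
    # name: (annual revenue $M, upfront cost $M, years offline)
    "lease legacy as-is": (4.0, 0.0, 0),
    "retrofit":           (9.0, 25.0, 2),
    "new build":          (12.0, 60.0, 3),
}

for name, (rev, cost, downtime) in options.items():
    print(f"{name:20s} 10-yr net value: ${net_value(rev, cost, downtime, 10):.1f}M")
```

With these particular (invented) numbers, continuing to lease the legacy space edges out the retrofit over ten years, precisely because of the income lost during the renovation downtime. Shift the assumptions and the ranking flips, which is the point: the answer depends on the facility, not on a rule of thumb.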

The geography, size of the available space, and capabilities of the building itself from a performance standpoint also play a part. If the building where a data center is housed is at capacity in terms of available power, it will never be able to completely meet the demands of a modern AI deployment.

In a case like this, it makes no sense to attract a customer looking to host a modern IT stack, so why not refocus your efforts on attracting different enterprises that don't have the same constraints?

There are significant opportunity costs to bringing a legacy facility in line with the requirements of cutting-edge applications like AI, hyperscale, and hybrid cloud. At the same time, plenty of less demanding IT applications still require data center space outside their owners' own facilities.

It is vital to fully understand both your market and your assets before undertaking any kind of retrofit. Don't approach a refit from a wholly application-first point of view. Don't decide that your site is going to host generative AI or HPC and then find out later that you haven't got the necessary power capacity, insulation, or access to water. Figure out what your building can do and start your design from there. If you start at the rack and try to create the necessary power and cooling infrastructure to support a very specific rack environment, you're likely to end up committing to something difficult or even impossible down the line.

Law of Diminishing Returns

Retrofitting legacy data center infrastructure will be a part of this industry’s story – especially as more stringent environmental regulations take hold over the decade – but when it comes to legacy infrastructure trying to capture demanding next-generation workloads, for the moment it remains more appealing to build something from scratch. 

Make no mistake, there will be a tipping point where it becomes harder to build new sites than it is to retrofit existing infrastructure, but it isn’t here yet. For now, there is so much capacity needed across the industry that it isn’t a question of retrofitting or building new; it's a matter of doing both. The focus, however, is on building new data centers because it's easier and faster, especially if you're aiming to support a modern application like AI.

There's no denying that this is creating a major buildup of underutilized data center space all over the country, much of it not particularly old. The market is going to need to hit the law of diminishing returns before it shifts its focus away from new builds towards retrofitting existing infrastructure – and that moment is coming. When it arrives, the industry will need to be ready.

Sam Prudhomme is President at Accelevation.
