Looking Back at Data Center World 2023: 3 Concerns That Stood Out

Data Center World highlighted three major challenges in the data center industry: power and heat density, site and construction, and sustainability.

There was a lot going on in Austin, Texas, last week, as data center specialists, technology vendors, and industry experts flocked to the Austin Convention Center to take part in the annual Data Center World show. Along with a wide range of IT-centric topics, programming this year also included the Data Center BUILD conference, which focused on the unique challenges of data center design and construction, and the Omdia Analyst Summit, where leading technology analysts spoke on key topics of IT industry significance.

I was privileged to host the Data Center Knowledge News Desk on May 11, and though I was spread too thin to attend many of the conference sessions, I had the honor of chatting with a number of industry experts, vendors, and analysts to get their take on the current and future state of the data center industry. Of course, that is a rather broad topic, given the evolving nature of server-based computing. Today, the term "data center" can reasonably apply to anything from micro and edge facilities hosting a single partial rack or a few full racks, to cloud-scale and colocation campuses spanning millions of square feet and consuming gigawatts of power.

Despite the variables of scale, data center environments are ultimately responsible for providing redundant power, cooling, security, and connectivity for critical IT infrastructure. In our interviews with industry experts, three major themes kept popping up that seem common across the data center community, regardless of scale: the growing power and heat density challenges, site challenges and construction efficiency, and sustainability concerns.

Power and Heat Density Challenges

This has been a persistent issue ever since I started covering data centers more than two decades ago. The raised-floor design was dominant from the 1960s through the '90s, when mainframe components were physically larger but far less dense in terms of power consumption. A full rack in those early days typically consumed less than 5kW, but with the growing interest in AI, a single Nvidia DGX H100 server optimized for AI workloads stands just 14 inches (8 RU) tall yet can draw up to 10kW on its own.

This is actually a fair tradeoff, given the immense computing power offered by this class of supercomputer in a box. But physics is the law, and power in = heat out at nearly a 1:1 ratio. This holds true across today's increasingly dense computing environments, so it is reasonable to anticipate loads of 1kW per RU for the latest generation of high-performance systems, which in turn necessitates new cooling options that can manage heat loads in excess of 40kW in a single rack.
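To put those numbers in perspective, here is a minimal back-of-the-envelope sketch in Python. The figures and the airflow rule of thumb it uses (CFM roughly equal to 3,160 x kW / delta-T in degrees Fahrenheit) are our own illustrative assumptions, not anything presented at the show, but they show how quickly air-cooling requirements escalate as rack density climbs.

# Back-of-the-envelope sketch (illustrative numbers, not vendor specs):
# nearly all electrical power drawn by IT gear is ultimately rejected as heat,
# so a rack's power draw translates almost directly into its cooling load.

def rack_heat_load_kw(it_load_kw: float) -> float:
    """Heat rejected by a rack, assuming power in = heat out at roughly 1:1."""
    return it_load_kw  # effectively all input power leaves the rack as heat

def required_airflow_cfm(heat_load_kw: float, delta_t_f: float = 20.0) -> float:
    """Approximate airflow (CFM) needed to remove the heat with air cooling.

    Uses the common rule of thumb CFM ~= 3,160 * kW / delta-T (deg F),
    where delta_t_f is the allowable air temperature rise across the rack.
    """
    return 3160.0 * heat_load_kw / delta_t_f

if __name__ == "__main__":
    # Legacy rack, a single DGX H100-class server, and a dense AI rack.
    for rack_kw in (5, 10, 40):
        heat = rack_heat_load_kw(rack_kw)
        cfm = required_airflow_cfm(heat)
        print(f"{rack_kw:>3} kW rack -> ~{heat:.0f} kW of heat, ~{cfm:,.0f} CFM of airflow")

At a 20-degree Fahrenheit air temperature rise, a 40kW rack works out to several thousand CFM of airflow through a single cabinet, which helps explain why operators are looking beyond conventional air cooling.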

This goes well beyond what traditional raised-floor environments can manage. And even with power-assisted air handlers at both the front and back of the rack that can manage up to ~18kW, it's likely that high-density racks will have to be isolated from the rest of the infrastructure and/or provided with auxiliary cooling.

The answer in many cases may be closed-loop, water-based cooling: either indirect, in the form of back-of-rack or in-line cooling modules, or direct-to-chip for those hardware vendors that have embraced water-cooled heat sink technology.

In addition, the ongoing compression of server infrastructure means floor space is less of an issue for some operators. We've heard from a number of them who simply don't fill racks to capacity, or who isolate high-performance systems in a hot-aisle containment "island" within their existing environment, almost like a data center within a data center.

Regardless of whether the environment is raised-floor or slab-based, there is likely to be continued need for rack power and cooling in excess of 20kW, with some estimates reaching over 80kW in the foreseeable future.

Site Challenges and Construction Efficiency

While there has certainly been an increase in purpose-built data centers, a relatively large number of operators are still forced to work within the constraints of existing structures. In our conversations we found that enterprise, medical, and university facility managers are often asked to locate data center technology in whatever space may be available in a legacy environment, such as century-old buildings, repurposed office space, and even parking facilities. These site constraints only add to the difficulty of accommodating IT infrastructure with modern requirements for power, cooling, and communications, but they are ultimately typical of expanding any type of facility in a brownfield, urban environment.

From the perspective of new construction, there has been a growing trend toward prefabrication for data center facilities. For those of us who have been around for a while, the idea of prefab conjures memories of the mediocre construction and substandard components found in the earliest forms of prefab housing.

Technology has certainly evolved since then, and prefabrication is becoming a viable and cost-effective option for purpose-built facilities such as data centers. Given the similar requirements shared by large-scale data centers of any kind, the efficiency and standardization of prefab modularity have the potential to significantly shorten construction time, reduce costs, and simplify scaling compared with traditional construction methods.

While prefabrication is still relatively new to the data center industry, it has the potential to compete in the commercial IT market as adoption increases and more standardization emerges.

Sustainability

From what we've observed over the last few years, the idea of long-term sustainability has taken hold within the IT industry. Given that data centers primarily consume electricity and generate heat, it's encouraging to find that the industry's focus has moved beyond the basic challenges of technological growth to at least begin thinking in terms of improving efficiency and treading lightly in a world that's starting to show its age. The principles of environmental, social, and governance (ESG)-aware management are starting to bear fruit in operational cost savings as well as growing goodwill among employees and communities alike.

We hope this is only the beginning, because the challenges of minimizing greenhouse gas emissions and increasing the availability of renewable energy are both complex and continually evolving.

A more recent concern lies in the substantial consumption of fresh water for data center cooling, especially in regions already experiencing water stress. While evaporative cooling can be very energy-efficient and has been adopted by a number of hyperscale facilities, it can also be extremely wasteful of the fresh water we've long taken for granted. We hope that awareness of the need to protect fresh water continues to grow around the world, and that water efficiency becomes a standard factor in the data center industry's ESG calculations.
