The explosion of generative AI and machine learning (ML) into the public consciousness has brought about new focus on the capabilities of these promising technologies. Advances in the development of large language models have made AI technology more accessible to the general public through chatbots like ChatGPT and image generators like DALL-E 3. But consumer technologies are only scratching the surface of the potential of AI – these technologies are being leveraged by businesses to support supply chain management, financial analysis, marketing, search, image generation, and more.
The AI industry is expected to grow significantly in the coming decade and reach nearly $2 trillion by 2030. As technology continues to improve and governments grow more comfortable with its implementation, industries like healthcare, eMobility, energy generation, and power utilities will scale their use of AI technology to drive more streamlined business practices and better outcomes for their customers.
New Technology, New Data Center Demands
Customers may be used to the streamlined interfaces of AI and ML applications, but data center managers know the extreme amount of data that must be processed behind the scenes to make these experiences possible. This requires high-performance chips on the cutting edge of IT development.
The powerful chips that enable AI require precise power management and, importantly, cooling. The heat given off by advanced applications requires data center managers to adapt to high heat loads while maintaining the ability to scale operations to meet demand. To further complicate matters, increasing the physical footprint may not always be an option – data center managers and engineers often need to solve the technical challenge of fitting more computing power into the same space. Additionally, customers from all verticals will always require 24/7 uptime, so the demands of AI applications often need to be met without completely remodeling or restructuring data center architecture.
Cooling Approach Must Shift
For installations that expect to support AI infrastructure and next-generation high-performance chips, traditional cooling approaches will not be enough. Data centers that try to manage increasing heat loads with high-velocity air cooling will quickly become wind tunnel-like environments that are difficult to work in and expensive to operate. Additionally, when air cooling systems work overtime to maintain necessary operating temperatures, it puts facilities at risk of equipment failures, unplanned downtime, and high energy costs. Liquid cooling offers a better solution for many data centers.
Whether it’s a complete liquid cooling solution or a hybrid solution, bringing liquid cooling into data center architecture can drive better performance while saving energy. However, for data centers being designed or remodeled to support the most demanding applications, liquid and direct-to-chip cooling is often the only viable option.
Liquid cooling systems can help data centers increase capacity while maintaining efficient space and energy use. They can also lower the total cost of ownership for data center facilities. Liquid cooling systems provide an effective solution for achieving the required temperature parameters of next-generation technology because liquid offers a much greater heat transfer capacity than air. This improves power usage effectiveness (PUE) – the ratio of total facility power to IT equipment power, which measures how much of a facility's power goes to computing rather than auxiliary systems, with values closer to 1.0 being better.
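The two claims above – liquid's greater heat transfer capacity and the resulting PUE improvement – can be sketched with standard textbook material properties. This is an illustrative back-of-the-envelope calculation, not vendor data; the flow rates, temperature rise, and facility power figures are assumptions chosen for round numbers.

```python
# Illustrative sketch: how much heat a given volumetric flow of air vs.
# water can carry away, and how reduced cooling overhead feeds into PUE.
# Property values are standard textbook approximations.

def heat_removed_kw(flow_m3_per_s, density_kg_m3, cp_kj_per_kg_k, delta_t_k):
    """Q = m_dot * cp * dT, with mass flow m_dot = volumetric flow * density."""
    return flow_m3_per_s * density_kg_m3 * cp_kj_per_kg_k * delta_t_k

# Same volumetric flow (0.1 m^3/s) and same 10 K temperature rise.
air = heat_removed_kw(0.1, 1.2, 1.0, 10)        # air: ~1.2 kg/m^3, cp ~1.0 kJ/(kg*K)
water = heat_removed_kw(0.1, 1000.0, 4.18, 10)  # water: ~1000 kg/m^3, cp ~4.18 kJ/(kg*K)
print(f"water carries ~{water / air:.0f}x the heat of air at equal flow")

def pue(total_facility_kw, it_equipment_kw):
    """PUE = total facility power / IT equipment power; closer to 1.0 is better."""
    return total_facility_kw / it_equipment_kw

# Hypothetical numbers: for a 1000 kW IT load, cutting cooling and other
# overhead from 400 kW to 150 kW lowers PUE from 1.4 to 1.15.
print(pue(1400, 1000), pue(1150, 1000))
```

The volumetric comparison (on the order of thousands of times more heat per unit of flow) is why direct-to-chip loops can serve rack densities that air handling cannot.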
Solutions at Scale
There are options for data centers that cannot implement a fully liquid-cooled architecture. Data centers can cool a single rack or a small set of racks where AI and machine learning applications are housed, rather than deploying full-scale liquid-cooled data halls.
When implementing these spot solutions, data center managers need to understand future business plans. Using dedicated cooling solutions to solve a unique problem is a feasible approach, but due to cost, energy efficiency, and other factors, a solution for one problem may not be the solution for another. As all data center managers understand, different challenges require different solutions, and a one-size-fits-all approach rarely succeeds. This may mean planning next-generation data centers to be fully liquid-cooled or exploring hybrid liquid-to-air solutions that bring liquid cooling to the rack and chip level while operating within air-cooled infrastructure.
Additionally, many data centers are preparing for next-generation cooling by installing server racks with manifolds and the additional pipework necessary for liquid cooling. This eases the transition to liquid cooling when it arrives because rack-level infrastructure is already compatible with facility liquid.
The biggest advantage of planning for the future and understanding IT workloads is the realization that almost all potential cooling solutions can be built out in combinations, allowing data center managers to match their power and cooling capabilities with shifting demands. The key to sustainable growth is a variety of flexible options for supporting next-generation equipment. Liquid cooling technologies help drive that flexibility.
Other Infrastructure Considerations
Outside of cooling, there are other pieces of data center infrastructure that are important for the deployment of AI and ML technologies. For instance, the remote monitoring and control capabilities of smart power distribution units (PDUs) can increase energy efficiency while reducing the risk of downtime.
Leak detection is also important. At the facility level, there are plenty of ways liquid can potentially make its way into data centers. Facility water pipes, if not properly protected, can freeze and burst. Backup generators can leak fuel. In some cases, liquid cooling lines can be damaged. Leak detection technology helps data center managers remotely pinpoint the exact source of leaks and shut down equipment to prevent damage. This remote monitoring and control of equipment is critical for these kinds of emergency situations as well as to keep an eye on the day-to-day efficiency and smooth operation of a data center.
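The leak-response pattern described above – pinpoint the affected zone remotely, then shut down only the equipment at risk – can be sketched in a few lines. This is a hypothetical illustration, not a real product API; the sensor names, zones, and shutdown hook are all assumptions.

```python
# Hypothetical sketch of zone-based leak detection and response.
# Zone names and the shutdown callback are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class LeakSensor:
    zone: str   # e.g. "rack-12 manifold", "facility water main"
    wet: bool   # True if the sensor detects moisture

def locate_leaks(sensors):
    """Return the zones reporting moisture so staff can pinpoint the source."""
    return [s.zone for s in sensors if s.wet]

def respond(sensors, shutdown):
    """Power down only affected zones, preserving uptime everywhere else."""
    leaks = locate_leaks(sensors)
    for zone in leaks:
        shutdown(zone)  # e.g. signal the smart PDU feeding that zone
    return leaks

sensors = [LeakSensor("rack-12 manifold", True),
           LeakSensor("facility water main", False)]
respond(sensors, shutdown=lambda z: print(f"powering down {z}"))
```

The design choice worth noting is the separation of detection from response: the same zone map that drives emergency shutdown also supports the day-to-day monitoring the article describes.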
The proliferation of AI, ML, and high-performance computing is already bringing many new challenges for data center managers, but with the right supporting solutions and systems in place, it will also bring exciting opportunities. With thoughtfully designed cooling and power technologies, data center managers and consumers alike can benefit from this exciting technology.
Marc Caiola is the Senior Director of Global Data Solutions at nVent. Marc has over 30 years of technology and business leadership experience in the defense and ICT data and communications industries.