
Comparing the Top Data Center Liquid Cooling Designs

Rear-door heat exchangers, direct-to-chip, conduction cooling, immersion: here's how they differ

As compute density rises (however unevenly), one big question data center managers face is whether commodity CPUs and chipsets, and especially accelerator chips such as GPUs and FPGAs, will push heat density high enough to make liquid cooling as mandatory in the enterprise as it already is in high-performance computing. Should enterprises start making their investment plans now?

That liquid is a better heat-transfer medium than air is indisputable. HPC centers worldwide depend on it today for applications where even forced-air cooling wouldn't cut it for more than an hour. In many of these systems, chilled water is pumped through cold plates that make direct contact with the processors.
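
A rough back-of-the-envelope comparison (using standard textbook property values, not figures from this article) illustrates why: per unit volume and per degree of temperature rise, water absorbs on the order of a few thousand times more heat than air.

\rho_{\mathrm{water}}\, c_{p,\mathrm{water}} \approx 1000~\mathrm{kg/m^3} \times 4.18~\mathrm{kJ/(kg\cdot K)} \approx 4180~\mathrm{kJ/(m^3\cdot K)}

\rho_{\mathrm{air}}\, c_{p,\mathrm{air}} \approx 1.2~\mathrm{kg/m^3} \times 1.0~\mathrm{kJ/(kg\cdot K)} \approx 1.2~\mathrm{kJ/(m^3\cdot K)}

\frac{4180}{1.2} \approx 3500

In practice the advantage a given liquid cooling design captures depends on flow rates, approach temperatures, and how close the liquid gets to the heat source, but the underlying physics is what makes every design below viable.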

Everyday enterprises have been dipping their toes into the shallow end of the liquid cooling pool since around 2000, with some buying racks fitted with rear-door heat exchangers (RDHx). Thirteen years ago, as part of a contest, IBM engineers came up with a rear-door design that mounted onto existing racks, needed no fans, and leveraged a facility's existing chilled-water plant. IBM was finally granted a patent on the design in 2010, and most passive exchanger components today appear to be derivatives of that model...
