Green Revolution’s Immersion Cooling in Action
April 12th, 2011 By: Rich Miller
High-density cooling specialist Green Revolution Cooling has published photos and video of several installations of its product, which submerges servers in a liquid similar to mineral oil. The Austin, Texas startup said its cooling enclosures can eliminate the need for CRAC units and chillers, allowing users to cool high-density servers at a fraction of the cost of traditional racks.
Green Revolution’s CarnotJet Submersion Cooling System resembles a rack tipped over on its back, filled with 250 gallons of dielectric fluid, with servers inserted vertically into slots in the enclosure. Fluid temperature is maintained by a pump with a heat exchanger, which can be connected to a standard commercial evaporative cooling tower. The company says its solutions will work with OEM servers with slight modifications (removing unneeded fans, applying a coating to hard drives).
Liquid cooling is used primarily in high-performance computing (HPC) and other applications requiring high density deployments that are difficult to manage with air cooling. Interest in liquid cooling has been on the rise as a growing number of applications and services are requiring high-density configurations.
Green Revolution’s first unit was installed at the Texas Advanced Computing Center in Austin, home to the Ranger supercomputer. This video features a four-rack (100 kW) installation at Midas Networks, an ISP in Austin:
Green Revolution says its enclosures represent a 50 percent savings in overall energy costs for the workloads at Midas Networks. The company says the payback on the initial investment in the liquid cooling system ranges from one to three years.
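As a rough illustration of how a one-to-three-year payback can fall out of a 50 percent energy saving, here is a minimal sketch. All numbers below (facility draw, energy price, system premium) are hypothetical, not figures from Green Revolution:

```python
# Hypothetical: a 100 kW IT installation drawing ~170 kW total with cooling,
# a $0.10/kWh energy price, a 50% cut in overall energy cost, and an assumed
# $150k up-front premium for the liquid cooling system.
facility_kw = 170.0
kwh_per_year = facility_kw * 24 * 365          # annual energy consumption
annual_energy_cost = kwh_per_year * 0.10       # dollars per year
annual_savings = annual_energy_cost * 0.50     # the claimed 50% reduction
payback_years = 150_000 / annual_savings
print(f"payback: {payback_years:.1f} years")
```

With these assumed inputs the payback lands around two years, comfortably inside the range the company quotes; the real figure obviously depends on local energy prices and the actual system cost.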
Mineral oil has been used in immersion cooling because it is not hazardous and transfers heat almost as well as water, but doesn’t conduct an electric charge. Green Revolution is among a number of companies introducing liquid cooling solutions that immerse servers in fluid. An immersion system with a different design was introduced by UK firm Iceotope at the SC09 event, while Hardcore Computing introduced its Liquid Blade immersion cooling unit last year.
A photo gallery on the Green Revolution web site shows other early installations, in which the enclosures sit atop a containment system to serve as a backstop against leaks.
“The containment here is a 3 inch metal wall, made of angle iron, surrounding the tanks and pumping module and sealed to the concrete slab below,” said Mark Tlapak of Green Revolution. “The area holds significantly more than one rack. In between the tanks we place expanded metal catwalk that sits 3 inches high to allow people to walk around the racks even if the containment area contains coolant. Each tank has two coolant level detection sensors that tie into the control software and send out instant alerts in the event of a change in coolant level.”
Here’s another look at the Midas Networks installation:
[...] to save money on data center cooling? Tip your racks on their side, fill them with mineral oil, and submerge your servers. Austin startup Green Revolution Cooling (first profiled here) has a video demo of its immersion [...]
WJPosted April 13th, 2011
This is all based on a false premise.
There is still heat H to be removed from the server.
The oil improves heat transfer between the electronics and the medium,
BUT the heat still must be removed and dumped.
This is not truly more efficient and certainly not “greener” than using air as the medium.
This will make servicing the equipment more difficult and possibly shorten its life, while requiring more complex coolant systems, making this system less efficient and less “green”.
neiliusPosted April 13th, 2011
@WJ – while there is still the same amount of heat to be removed a more efficient system, such as this, will use less power to remove that heat. As to how it affects the life of the system I have no idea.
FuujuhiPosted April 13th, 2011
Less efficient, less green?
Let me think… you have a liquid with lots of heat you need to dissipate… hmmm, what could we do with it? throw it away in the air?!? Surely you jest.
Green requires innovation. Clearly, if we are this shortsighted, we will never pull off the green revolution. Funny how people sometimes spend more energy fighting an idea than looking at how they could improve it.
VPSLISTPosted April 13th, 2011
This is a very interesting concept, and I agree with WJ – what do you do with the mineral oil that is left over or needs to be dumped? Can it be recycled? What if this stuff starts leaking?
I know something has to be done, with 1/5 of the world’s electricity being used by data centers. Virtualization, with OpenVZ and Xen to turn everything into virtual private servers, plus other approaches, could get us down to fewer physical servers, but what other options do we have?
AJPosted April 13th, 2011
I believe the savings would be in the amount spent on cooling, not in the amount spent powering the servers. It is definitely possible to cool servers more efficiently with liquid than by blowing cold air at them.
The claim that seems dubious to me is 50% savings. The energy cost of cooling should be some fraction of the energy consumed by the rack, which is radiated as heat. If the only savings are in cooling, 50% is an impossible figure unless they were spending at least as many watts on cooling as on the servers themselves (which is virtually impossible, since refrigeration is not close to that inefficient). I could see perhaps a 50% savings on cooling costs.
I’m also curious how the drives work. I don’t believe spinning hard drives are submersible as they are not air-tight. I suppose they could be using all solid state drives.
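AJ’s bound can be checked with one line of arithmetic. In the sketch below (hypothetical fractions, just to bound the claim), cooling power is modeled as a fraction c of IT power, so PUE = 1 + c, and eliminating cooling entirely saves c / (1 + c) of the total:

```python
# If cooling draws a fraction c of the IT (server) power, total power is
# IT * (1 + c). Removing cooling entirely saves c / (1 + c) of the total.
for c in (0.5, 0.75, 1.0):
    max_savings = c / (1 + c)
    print(f"cooling at {c:.0%} of IT power -> at most {max_savings:.0%} total savings")
```

Only at c = 1.0, i.e. cooling drawing as much power as the servers themselves, does the ceiling reach 50%, which is exactly AJ’s point: a 50% total saving cannot come from cooling alone.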
AJPosted April 13th, 2011
The mineral oil is not consumed in the process. Like water based cooling, the oil is cooled and cycled through the system continuously.
The mineral oil generally just transfers heat between the servers and the cooler, so it isn’t used up. It is denser than air, which is why it is more efficient for this. Wikipedia says it is FDA approved as a food additive, so even in the case where it needs to be disposed of, it doesn’t seem very dangerous. Seems like a nice improvement!
CliffPosted April 13th, 2011
@WJ The system uses the oil much like a water-cooled system uses water. The hot oil is run through a heat exchanger that requires considerably less energy, and you are only cooling the systems, not the entire data center room (working ambient temp of -40°C to +100°C).
@AJ The hard drives must be encapsulated at GRCs facility prior to installation in the system unless they are SSDs.
The FAQ page on GRCs website answers many questions:
AGPosted April 13th, 2011
Mark T.Posted April 13th, 2011
Fair disclosure/disclaimer: I work for this company.
Yes, heat must be removed, as is shown in the video, but the premise is that it is done much more efficiently. You can save 50% of total power if you first cut cooling power by about 95%, reducing your PUE from 1.7–1.8 (average US figures) to the 1.1 range (assuming conversion losses still add some power). Second, if you remove the server fans and save 15 or 20% of server power, your actual power use is much less than an air-cooled data center at 1.1 PUE. Add the cooling and server fan savings together and you get 45–50%. Less power can also mean smaller generators, etc.
How do you get the power savings? First, your coolant can run at a much higher temperature and still be very effective. Like jumping in a swimming pool vs. standing in air, liquid transfers heat much more quickly: a 40°F swimming pool will kill you quickly, while 40°F outside air is just unpleasant for a while. Therefore the coolant temp can be higher, say 40°C, yet keep CPUs cooler, say 60°C under full load (vs. 75°C under full load in air).
So high-temperature coolant is easy to cool, because heat naturally flows from hot to cold, and 40°C (104°F) coolant is very easy to cool in practically all climates. You could even run the coolant hotter and still keep components cooler than normal.
Second, pumping small quantities of liquid is much more efficient than pumping large quantities of air. Our cooling savings are what our actual third-party installs are experiencing: measuring PUE without power losses, they are getting 1.02 to 1.06 year-round (e.g., in June in Texas, not just December in cooler climates), and also getting server fan savings of up to 30%.
The coolant doesn’t need to be replaced and is very stable. You need very high temps to cook mineral oil, and you don’t have near that here.
AJ is right – the platter-style hard drives need to be sealed.
As far as life of systems, to date we’ve experienced fewer failures than you’d expect over the 2+ years we’ve had servers in our solution. Items that cause failure in servers include fans (gone), bad connections due to oxidation (one reason a laptop doesn’t always work after you take it apart and put it back together), which is mostly eliminated, localized heating (reduced), and of course hard drives or manufacturer defects, which we can’t do anything about.
[...] Knowledge just posted this story about Green Revolution’s submersive server cooling [...]
dibbzPosted April 13th, 2011
Will my Oracle m9k be supported?
ChrisPosted April 13th, 2011
@WJ: Yes, the heat generated by the server, in BTUs or watts, is the same. The increase in efficiency comes from reducing the power needed to chill the cooling medium. Air neither transfers heat well nor has a high heat capacity in comparison to mineral oil. The result is that a cooling system using air needs an active air conditioning unit to chill the medium after it’s been warmed by the servers. If you change the medium to mineral oil, which has both a higher heat capacity and a higher heat transfer rate, you can replace the air conditioner with a passive radiator. The saving that increases efficiency is the energy you no longer use to run the air conditioner.
EddiePosted April 13th, 2011
@VPSLIST, I think that virtualisation will allow us to maximise the use of most of the cycles on computers (can someone chime in with an efficiency figure for running multiple virtualisation sessions on one system?). The problem that people are facing is that you need to spend cycles to get computing, and as the base load of computing increases, so will the number of cycles required, and hence the power required.
However, when thinking about this, also factor in the power required to route the traffic to and from the virtual servers.
Ed OsloPosted April 13th, 2011
WJ, you don’t read much, do you? The liquid design is specifically for HIGH DENSITY servers where it is impractical to install enough fans. If you trade all of the fans required to cool a high-density rack for one liquid pump and a radiator, you WILL save money, hence the green part.
jasPosted April 13th, 2011
Hah at “Wikipedia says”.
You know anybody can add junk to that thing, right? Don’t use it as a reference.
Mark TPosted April 13th, 2011
Disclaimer: I work for the company
Regarding fluid life and recycling: the coolant lasts the life of the data center or longer. It is filtered, but that is it. So, 10, 20, 30 years… no real lifetime issues. It can be recycled, but again, if you’re never replacing the coolant, recycling is less of an issue.
As far as evaporation/refill, which no one asked but people tend to, the evaporation rate is imperceptible, and over 2 years we haven’t replaced a drop. That said, if you did, the cost of the coolant is under $10/gallon, perhaps well under, so not a big deal either way.
[...] Data Center Knowledge: High-density cooling specialist Green Revolution Cooling has published photos and video of [...]
ToddPosted April 13th, 2011
@Chris: The heat generated by the server is LESS since you don’t have the extra fan energy. You’ll save a _minimum_ of 8% of the server energy; 12% seems typical. I haven’t seen any servers at 30%, but maybe there are some.
LeePosted April 13th, 2011
I love how he calls it a cooling SOLUTION, when the servers are sat in solution
PabloPosted April 13th, 2011
Old mainframe computers were liquid cooled (IBM 3090).
It’s not new.
Even with car engines the problem is the same.
Remember the air-cooled VW Beetle?
With more heat to dissipate air is not enough.
BillPosted April 13th, 2011
Not sure why the naysayers are picking this apart, as it makes perfect sense. Heat transfer through liquid is orders of magnitude better than through air, which is an INSULATOR (think fiberglass, foam, etc.), of all things.
So, in order to remove heat via air, you need AC units, which of course perform cooling using compressors. Meanwhile, the liquid is moved around by a simple pump, which is a far smaller load. In addition, routing the air around requires fans (as mentioned), which draw power as well.
By eliminating the fans, eliminating the compressors, and using a far more efficient method of heat transfer the power required to run what is admittedly an inefficient cooling system is greatly reduced.
Then there is also the increased reliability of a simpler system: fewer parts to fail (no fans, which are prone to failure), and the simple backup of a large effective heat sink should even a pump fail. Bringing the oil from room temp (30°C) to 65–70°C (where things start to fail) will take a significant amount of time, during which you can of course replace the pump before things become a problem.
Nifty idea, I’ve seen other DIY liquid submersion systems, but on a large scale like this it certainly makes more sense.
Simon T.Posted April 13th, 2011
This is an obvious solution for dealing with any high-density heat source. We used Freon (before it was banned) to do the same thing. We later switched to mineral oil. We were cooling high-power electromagnets in limited space. Water was just too corrosive. It does take a different mindset and serious engineering to convert to liquid cooling rather than traditional air cooling.
stonewallPosted April 13th, 2011
I am not a computer geek. I am an HVAC/R tech with over 20 years of experience. Power consumption for heat transfer is always lower if you can use a medium with better transfer properties. Air is very inefficient. I do wonder about the evaporative tower concept, though. It seems it would be better to treat this as a geothermal setup: take the heat from the “oil”, transfer it to glycol via a coil-in-coil heat exchanger, and then use the glycol to dissipate the heat into the ground. At that point you would only need to run two pumps, to circulate the “oil” and the glycol.
jeffgPosted April 13th, 2011
Say goodbye to any warranty on the submersed equipment and double the duration of your maintenance windows. On the flip side, your friends will admire you for your ultra-moisturized hands.
GregPosted April 13th, 2011
Any potential reliability issues aside, the biggest issue I see is really just the increase in footprint. Now a rack that would normally take approx 2′ x 4′ will now take 3′ x 8′ (a guess).
If floor space isn’t a problem, this definitely seems like a step in the right direction towards where we need to be.
As others have mentioned, liquid cooling is far from new and has been used on computers for decades. It just hasn’t been as public as it is lately.
PhilipPosted April 13th, 2011
What is the advantage of immersion over standard water cooling solutions with pipes going to just the parts that can’t be passively cooled? There’s a clear advantage to not dunking your hardware in oil that prevents it from being serviced or resold.
Deja...Posted April 13th, 2011
Been there. Done that. Anybody else remember Fluorinert?
Dave APosted April 13th, 2011
Cray supercomputers have been liquid cooled for a long time.
Chris TeltingPosted April 13th, 2011
I think it’s a great idea. Even better efficiency if you add a redundant combined power supply and use DC to DC for all computers.
But I have to wonder about defective components like rupturing capacitors. I’m not sure what materials they use, but if the leakage is conductive, I have to wonder whether a small amount of contamination in the medium (mineral oil) could short-circuit adjacent blades in the system.
Mark TPosted April 13th, 2011
Disclaimer: I work for the company
Answers to recent questions:
1) We have seen 30% – one of the largest OEMs’ cloud offerings, using Westmere CPUs, runs 20–30% fan power; it’s a 2U, 4-board, 8-socket server. Anyone at a large data center that has tried that latest cloud offering will tell you it is super loud.
2) The coolant is not electrically conductive. Its dielectric strength is 6x that of air.
3) Warranty: we’ve partnered with one of the largest 3rd party warranty companies to offer warranty on all devices in our systems
4) Density issues – first, you have no hot aisle, and second you can fill your rack to high density, so I believe you’ll see that you won’t lose space
5) Advantage vs. water pipes: first, you submerge all components, not just say the CPU. Water pipes aren’t put on all components. Second, water pipes are expensive. Third, typically water pipes aren’t thermally coupled so air is still the primary heat transfer medium, which means there are still big inefficiencies.
MikePosted April 13th, 2011
Any chance you will be offering this to high-end desktop (gaming machine et al) builders? The only way I could see a submerged machine being beat in overclocking capability would be to use cryogenics.
And while I don’t remember Fluorinert other than the wikipedia article, I was a teenager during that time and remember the scene from “The Abyss” where the rat was breathing it. Still so awesome to this day…and still not usable for that IRL with humans.
stonewallPosted April 13th, 2011
The problem with Freon is that if it is exposed to an electrical spark it starts to form an acid, pretty much phosgene, related to the gases used in WWI. That is usually what causes compressor failure in an HVAC/R compressor: a spark causes acid, which causes more sparks, until the windings short out.
Submerging the entire server blade in a liquid that does not conduct electricity sounds much better than piping water to a board full of live electrical circuits. Pipes crack, bend, bust, break, and leak, and all kinds of havoc would start when the water started flowing.
BrunoPosted April 14th, 2011
But large data centers using this system ought to be located near places needing some sort of central heating, with the heat generated in the racks recycled right there. Central heating for free… well, almost!
Chris ColePosted April 14th, 2011
The oil is a polyalkylene fluid, commonly known as a PAG fluid. It so happens that I managed some research into the stability of PAG fluids during the 1980s, for example setting up a 1 kg test piece that was heated to 1,000 degrees and quenched in a bath of PAG fluid, which showed no deterioration over hundreds of cycles. PAG fluids are used for many different things, including lipstick. Yes! Lipstick. You see, it is derived directly from methane and, with no additives, has no other impurities. It can be washed off your hands, so as a working environment, I can see many advantages.
stonewallPosted April 14th, 2011
@Chris Cole, is this the same PAG we use for oil in some refrigeration systems? Highly hygroscopic?
RickPosted April 16th, 2011
Anyone else think WJ may sell traditional data center cooling? A high school physics class includes the basic knowledge about the efficiency of liquid vs air cooling. VERY quick research into the conductivity (or lack thereof) of this liquid negates the rest of the argument. Hopefully other naysayers will do more research before they protest, and save others the time required to educate them, as well as save themselves some embarrassment along the way.
DavidHPosted April 23rd, 2011
How does the heat get out of the computer room?
Our server room is just one big room in a commercial office tower.
Would we have some form of heat exchanger with the building chilled water supply?
@DavidH: the mineral oil is piped out of the computer room to a heat exchanger with water on the other side.
@Greg: although the rack footprint is larger, you don’t need a hot aisle or room for traditional CRAC units. The space is generally a wash.
@jeffg: GRcooling offers third party warranty support. See their website.
[...] were happy to see our video featured on Data Center Knowledge yesterday, where Rich Miller continues to provide great content focused on the data center [...]
[...] Green Revolution’s Immersion Cooling in Action « Data Center Knowledge. Share [...]
RomanPosted May 11th, 2011
Your argument is based on a false premise: water- or liquid-based cooling is more energy-efficient than air-based cooling. The same amount of heat has to be removed; how you remove that heat is the innovation.
The advantage of this system that I can see (in buildings with cooling towers) is that you can tie it into your cooling tower / heat exchangers. Basically, if you really wanted to be “green”, the only thing you would need to run is a pump to circulate the oil between the exchangers and the racks…
There was an article a year ago about ionized air being around 30% more efficient at removing heat vs. regular air. I wonder if anyone has looked into doing that as well… I’d think it would be way easier than submerging equipment in liquid, but as always there will be cons.
DanielPosted July 21st, 2011
Take a look at your current data center. How many thousands of square feet is it? Add in the height to get cubic feet. Then add in all the heat sources. These calculations tell you how many BTUs are needed to cool that space. Then you have to factor in the insulation in the walls and how to vent the heat of the machines doing the cooling. ALL that cubic footage of air has to be moved fairly rapidly through the cooling system. If you run a data center, you know the expense of all this equipment AND the electricity to run it. If you have little or no experience, then trust me, it’s an extremely significant part of the budget.
But how much of that cubic footage is a heat source? That’s the only place that really needs attention, right? The rest of the room’s requirements, sans the heat sources, can easily be met with more common (and less power-hungry) AC units already in use elsewhere in the building.
If you immerse the heat sources, you are now looking at moving liquid, which requires power, but you are also talking about no more thousands and thousands of fans (noise pollution to boot). From there, the heated liquid travels through any of a number of schemes for extracting the heat before returning it for another cycle.
Extracted heat can either be dissipated into the air (preferably outside the building depending on the temperature differential) or it can be captured. Captured heat = stored energy.
The potential savings at first glance are the lack of need for huge, expensive, energy-gobbling AC systems, and possibly an energy source that can be harnessed by some other method. No more fans means no more noise in the data center; hands up, everyone who would like to be able to converse in a normal voice in a data center. Not a green saving or anything like that, but at least you can relax and hear and talk without that high-pitched whistle that will damage your hearing over time.
In other words, separate the methods used to extract heat from the two different kinds of source: humans and their relatively modest energy users (personal computer, cellphone, the human body), and the more intense heat sources (small, extremely localized, and very hot), and you can use more efficient methods to keep them both cool.
BrettPosted October 31st, 2011
Space-wise, you can pack your racks with much higher density product, which is not always possible with traditional hot/cold aisle arrangements. Floor space can be reduced, or at least made neutral, in this way. As also noted, racks can be placed closer together because a) there is no hot/cold aisle requirement and b) equipment is accessed vertically, so racks don’t need to be rack depth + working space apart.
The huge volume of plant required for traditional cooling (chillers, ducting, heat exchangers, etc.) is also vastly reduced.
I’m really interested in seeing these types of solutions more widely adopted.
Keith U.Posted November 14th, 2011
The above concept is well understood. However, the real advance will be transferring that heat through a closed exchange with a nearby cold river or lake. I have lived in a very cold location in Northern Canada. One province alone has over 100,000 lakes and endless rivers. Along some of those rivers are very large hydroelectric stations. Therefore one has near-infinite year-round cold water and the cheapest electricity in North America, all in the same place. Not to mention very advanced fiber.
Cheap local clean power, endless flowing cold water, nearby fiber. Anyone interested in investing?
[...] years we’ve featured liquid cooling technology for the data center from companies including Green Revolution, Clustered Systems, Hardcore Computer, Iceotope and Coolcentric [...]