Simple Solutions Can Slash Cooling Costs
Dan Hyman, co-founder of Milpitas, Calif.-based Custom Mechanical Systems, has 25 years' experience designing custom HVAC systems for mission-critical facilities.
Cooling is the number one energy draw in the data center, and it can have an enormous effect on a company’s bottom line. It can be overwhelming to find the solution that’s right for you, but whether you’re in the midst of building a new data center or retrofitting an existing one, there are some very simple ways to reduce cooling-related energy costs. Below are tips to help create a more energy-efficient data center, no matter what stage it’s in.
If you’re building a new data center, here are tips to reducing cooling energy:
1. Design for the highest rack-entering air temperature that you are comfortable with. Warmer temperatures typically allow you to run the HVAC system at more efficient operating points. It actually takes more energy to make the air 55 degrees than it does to make it 65 degrees. Just like keeping the thermostat in your house several degrees higher can save a lot on utility costs, increasing the set point of your data center can have a dramatic effect on costs.
2. Economizers = free cooling. Air-side and water-side economizers take advantage of mild outdoor conditions to provide free cooling. Most climates have a significant number of hours where free cooling can be used, and historical weather data is readily available for most locations from sources like ASHRAE or a mechanical consultant. In its simplest form, free cooling works by bringing outside air into the data center when it is mild enough to provide cooling. For example, when it is 50 degrees outside – instead of cooling the data center air that is returned from the hot aisles, the system would simply blow the 50-degree air into the cold aisles. This allows the compressors or chillers to turn off and saves a huge amount of energy. Water-side economizers provide a similar benefit for buildings with chiller plants.
3. Invest in efficiency by looking to alternative HVAC equipment manufacturers. The traditional Computer Room Air Conditioner (CRAC) unit manufacturers don’t build very efficient equipment. What worked well in traditional computer rooms is no longer an efficient solution in a modern data center. These legacy products are typically based on inefficient fan and compressor technology that is several generations old. State-of-the-art products are available that use half the energy or less, and they aren’t hard to find. Look for technology specs that showcase innovation.
4. Don’t waste energy boiling water – invest in new humidification systems. The infrared and steam generating humidifiers that worked well in traditional computer rooms are a poor solution for the modern data center. These legacy systems essentially boil water in order to absorb it into the airstream. The boiling takes a lot of energy as does the cooling required to remove the heat added to the space by boiling water. Evaporative and atomizing type humidifiers use a fraction of the energy and provide a cooling effect in lieu of adding heat load to the space.
5. Invest in a CFD model and increase efficiency. A 3-D computational fluid dynamics (CFD) model can pinpoint problem areas and suggest solutions before construction starts. Most independent mechanical consultants can provide this service, as can firms that specialize in creating these models. A detailed space plan is loaded into the software along with a model of the cooling system, and the computer creates a 3-D model showing airflow and temperature gradients throughout the space. Changes can then be made to the density, layout and air distribution to find the most effective solutions.
6. Design in flexibility to save big. A properly designed data center will provide flexibility in layout, rack densities, and temperature control. If you design with flexibility at an early stage, you’ll save significant time, energy and money in the long term. Designing with flexibility includes the ability to locate high-density areas throughout the space, as well as providing additional future capacity.
7. Do your research – look at different system types. Traditional CRAC units are now being replaced with close-coupled cooling as well as central air handling systems. This trend began several years ago as densities increased, when the increased airflow required for higher densities put the spotlight on the amount of energy used and wasted by legacy-type designs. Both of these systems use substantially less energy than CRAC-type units.
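The economizer logic described in tip 2 can be sketched as a simple mode decision based on outdoor temperature. This is a minimal illustration, not a real control sequence: the 65-degree supply target and 2-degree margin are assumed values, and a production economizer controller would also check humidity or enthalpy before opening outside-air dampers.

```python
# Sketch of an air-side economizer decision based on outdoor dry-bulb
# temperature alone. Setpoints are hypothetical, chosen for illustration.

SUPPLY_SETPOINT_F = 65.0   # assumed cold-aisle supply-air target
MARGIN_F = 2.0             # deadband to avoid short-cycling between modes

def economizer_mode(outdoor_temp_f: float) -> str:
    """Pick a cooling mode from the outdoor temperature."""
    if outdoor_temp_f <= SUPPLY_SETPOINT_F - MARGIN_F:
        # Outside air is mild enough: dampers open, compressors off.
        return "free cooling"
    elif outdoor_temp_f <= SUPPLY_SETPOINT_F + MARGIN_F:
        # Blend outside air with some mechanical cooling.
        return "partial free cooling"
    else:
        # Too warm outside: run compressors or chillers normally.
        return "mechanical cooling"

print(economizer_mode(50.0))   # the article's 50-degree example
print(economizer_mode(80.0))
```

At the article's 50-degree example, the sketch returns "free cooling" – the compressors stay off and the fans simply move outside air into the cold aisles.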
Here are tips for making cooling improvements in your existing data center:
1. Go 3-D – invest in a CFD model. A 3-D computational fluid dynamics (CFD) model can pinpoint problem areas and suggest solutions. This model is only as good as the data that goes into it, so be prepared to do a detailed survey of your space.
2. Make it hot – raise the temperature in the room. Most servers can handle fairly high entering air temperatures. Increasing the temperature in your space allows the existing equipment to operate more efficiently.
3. Even out rack densities. Determine whether densities are concentrated in specific areas. If your cooling problem is concentrated in one part of the data center, there may be abnormally high density there, and spreading that load out across the space could solve the problem.
4. Look at close-coupled cooling. Row-based and overhead cooling systems can be retro-fitted into an operating data center to provide additional cooling.
5. Install VFDs and pressure control on CRAC units. Variable Frequency Drives (VFDs) modulate the airflow of the CRAC units, and head pressure control reduces compressor energy significantly. These devices effectively turn down the capacity of the equipment when loads are reduced, saving energy. Both can be installed with minimal disruption, and any capable service contractor can retrofit these devices onto your existing equipment.
6. Help control units make peace on a global basis. Units that have stand-alone controls without any communication capability will often fight each other, such that one unit is dehumidifying while the one next to it is humidifying. Any capable service contractor can rework the control logic to ensure that space conditions dictate how all units behave, so the units stop fighting each other.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
[...] Today’s “duh” comment of the day from a piece about data center cooling: [...]
Cooling optimization is definitely worth the investment, and these simple tips are very useful and should be implemented, providing better results for data center owners.
What should be the ideal temperature in a data center? We keep it around 68, which I think is overcooled.
Keith Schmidt, posted July 11th, 2012
Today’s “Duh!” comment was actually written by the person who bothered to call the statement “It actually takes more energy to make the air 55 degrees than it does to make it 65 degrees” a “Duh!” comment. Anyone familiar with HVAC generally or data center cooling specifically realizes he is referring to making the air 55 degrees from mean temperature without cooling.
Don’t assume you can retrofit every air conditioner with VFDs. If you slow down the fan on a legacy CRAC that doesn’t have a scroll compressor you’ll just freeze the coil. A VFD is not a “magic bullet” unless implemented with knowledge and good control.