
Giving Attention to Data Center Cold Spots

Now, cold spots have become the new challenge, and opportunity, for data center operations. A cold spot is any IT intake temperature less than the established minimum, writes Lars Strong, Upsite Technologies. He outlines tips for dealing with cold spots in the data center.

Lars Strong, senior engineer, thought leader, and recognized expert on data center optimization, leads Upsite Technologies' EnergyLok Cooling Science Services, which originated in 2001 to optimize data center operations. He is a certified US Department of Energy Data Center Energy Practitioner (DCEP) HVAC Specialist.

LARS STRONG
Upsite Technologies

Data centers exist to provide continuous power, connectivity and proper intake air temperatures to IT equipment. Hot spots are a well-known problem, occurring when IT intake temperatures rise above the recommended maximum.

Now, cold spots have become the new challenge, and opportunity, for data center operations. A cold spot is any IT intake temperature below the established minimum. That minimum is set by data center personnel or, where ASHRAE's recommended guidelines are followed, is 64.4 degrees F (18 degrees C).
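
To make the definitions concrete, here is a minimal Python sketch that classifies intake readings against the ASHRAE recommended range of 64.4 to 80.6 degrees F (18 to 27 degrees C). The thresholds default to the ASHRAE values but are parameters, since many sites establish their own limits; the sample readings are hypothetical.

    # Classify IT intake temperature readings as cold spots, hot spots, or OK.
    # Default thresholds are the ASHRAE recommended range (64.4-80.6 deg F).
    ASHRAE_MIN_F = 64.4
    ASHRAE_MAX_F = 80.6

    def classify_intake(temp_f, min_f=ASHRAE_MIN_F, max_f=ASHRAE_MAX_F):
        """Return 'cold spot', 'hot spot', or 'ok' for one intake reading."""
        if temp_f < min_f:
            return "cold spot"
        if temp_f > max_f:
            return "hot spot"
        return "ok"

    # Hypothetical intake readings from cabinet sensors (degrees F).
    readings = [62.1, 66.0, 71.5, 79.8, 82.3]
    for t in readings:
        print(f"{t:5.1f} F -> {classify_intake(t)}")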

Historically, computer rooms have been kept overly cold. There are a number of contributing factors:

  • The mainframes that once dominated data centers had no strong front-to-back orientation or airflow pattern that needed to be aligned with the room, so the entire room was kept cool to compensate.
  • Power was considered inexpensive.
  • Densities were low relative to the area IT equipment covered, so cooling capacity was not critical.
  • Power was a much smaller portion of the cost of running a data center than it is today.

As IT equipment densities and power consumption increased, hot spots started forming and were recognized as damaging. For the last decade, the emphasis has primarily been on getting rid of hot spots, often by applying advanced airflow management (AFM) techniques, adding more cooling capacity, and turning down cooling set points.
As the cost of electricity has increased, and more significantly as electricity has become a much larger portion of operating cost, emphasis has shifted to reducing the power consumption of the cooling infrastructure, the largest consumer of power in the data center after the IT equipment itself.

Thermodynamics

The increased efficiency and capacity of cooling units at higher return air temperatures are driving computer room operating temperatures up.

ASHRAE, working with IT manufacturers, has raised the recommended and allowable intake temperature ranges several times. The focus on the maximum intake temperature has led to a lack of awareness of IT equipment intake temperatures falling below the recommended minimum. As a result, computer rooms often have a very wide range from the lowest intake temperature to the highest. Range and efficiency are directly linked: the wider the range, the lower the efficiency.
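
The range itself is easy to quantify from the same kind of intake readings. A short sketch, again with hypothetical values:

    # Compute the spread from lowest to highest IT intake temperature.
    # A wide spread signals poor airflow management and lower efficiency.
    readings = [62.1, 66.0, 71.5, 79.8, 82.3]  # hypothetical, degrees F
    spread = max(readings) - min(readings)
    print(f"Intake range: {min(readings)} to {max(readings)} F, spread {spread:.1f} F")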

Data

Data from Upsite shows that cold spots are now even more prevalent in data centers than hot spots. Across the last eight data centers Upsite reviewed, totaling 84,600 sq. ft. of analyzed space, an average of 7 percent of cabinets had hot spots while 35 percent had cold spots. Not surprisingly, these same sites have an average rated cooling capacity of 2.4 times the IT load.

Opportunity

Cold spots reveal an opportunity to improve the efficiency and capacity of cooling units by raising return air temperature set points.

Set points often cannot be raised until overall AFM is improved to eliminate hot spots, or to reduce maximum intake temperatures where hot spots do not exist. Depending on the cooling unit type and design, efficiency improves by approximately 1 to 4 percent for every degree F increase in the return air temperature set point.

Cooling unit set points are often below the standard conditions the units were rated for, so they cannot deliver even their rated capacity. If set points are raised above the standard conditions, often 75 degrees F and 45 percent RH, cooling unit capacity will exceed the rated capacity.
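
As a rough back-of-the-envelope illustration of that rule of thumb, the sketch below applies a flat per-degree efficiency gain to a baseline cooling power draw. The 200 kW baseline, the 5 degree F raise, and the 2 percent midpoint are all hypothetical; real savings depend on unit type, design, and load, as noted above.

    # Estimate cooling power draw after raising return air set points,
    # using the 1-4 percent-per-degree-F rule of thumb from the text.
    def cooling_power_after_raise(baseline_kw, degrees_f_raised, gain_per_degree=0.02):
        """Apply a flat efficiency gain per degree F of set point increase."""
        savings_fraction = min(degrees_f_raised * gain_per_degree, 1.0)
        return baseline_kw * (1.0 - savings_fraction)

    # Hypothetical: 200 kW of cooling power, set points raised 5 degrees F
    # at a midrange 2 percent gain per degree -> about 180 kW.
    print(f"{cooling_power_after_raise(200.0, 5.0):.0f} kW")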

Process

Step one is to calculate your site’s Cooling Capacity Factor (CCF), which is determined by dividing the total running manufacturer’s stated cooling capacity (kW) by 110 percent of the IT critical load (kW). This number reveals the utilization of cooling capacity.
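
The calculation is simple enough to script. A minimal sketch, with hypothetical capacity and load figures chosen to land near the 2.4 average noted above:

    # Cooling Capacity Factor (CCF): total running rated cooling capacity
    # divided by 110 percent of the IT critical load.
    def cooling_capacity_factor(running_rated_cooling_kw, it_critical_load_kw):
        return running_rated_cooling_kw / (1.10 * it_critical_load_kw)

    # Hypothetical site: 1,200 kW of running rated cooling, 455 kW IT load.
    ccf = cooling_capacity_factor(1200.0, 455.0)
    print(f"CCF = {ccf:.2f}")  # ~2.40, i.e. 2.4 times the cooling needed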

Once you have determined the CCF of your data center, the second step is to address the 4Rs to ensure that your data center airflow is optimized.

The 4Rs consist of:

1: Raised Floor
Seal all unmanaged openings in the horizontal plane of the raised floor. A thorough effort is required to identify and seal all raised-floor penetrations. Electrical equipment such as power distribution units (PDUs) often has large openings that need to be sealed. This effort must be seen through to completion because, as each hole is sealed, the remaining holes release increasing volumes of valuable conditioned air.

2: Rack
Seal the vertical plane along the face of IT equipment intakes. Blanking panels that seal effectively (no gaps between panels) need to be installed in all open spaces within cabinets. The space between cabinet rails and cabinet sides needs to be sealed if not sealed by design.

3: Row
Manage airflow at the row level. Spaces between and under cabinets need to be sealed to retain conditioned air at the IT equipment face and prevent hot exhaust air from flowing into the cold aisle.
Adjust perforated tile and grate placements to make all IT equipment intake air temperatures as low and even as possible. This will include replacing perforated tiles or grates with solid tiles in areas where excess conditioned air is being provided, and adding perforated tiles to areas where intake temperatures are the highest.
All perforated tiles and grates located in dedicated hot aisles and open spaces should be replaced with solid tiles. For high-density rooms and rooms with layout challenges (e.g. low ceilings, cabinet and/or cooling unit placement), partial or full containment strategies may be warranted.

4: Room
In most cases, even with high percentages of excess cooling capacity running, the first three fundamental steps of AFM must be implemented before changes can be made at the room level to reduce operating expenses. A common misconception is that AFM initiatives alone reduce operating expenses. Improving AFM will improve IT equipment reliability and throughput and free stranded capacity. However, to realize operational cost savings and defer the capital expenditure of additional cooling capacity, changes must be made to the cooling infrastructure, such as raising cooling unit set points, raising chilled water temperatures, turning off unnecessary cooling units, or reducing fan speeds on units with variable frequency drives (VFDs).
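
The fan speed item deserves a note: for units with VFDs, fan power falls roughly with the cube of fan speed (the fan affinity laws), which is why modest speed reductions yield outsized savings. A sketch with a hypothetical 10 kW fan:

    # Fan affinity law: fan power scales roughly with the cube of fan speed.
    def fan_power_at_speed(rated_power_kw, speed_fraction):
        return rated_power_kw * speed_fraction ** 3

    # Hypothetical: a 10 kW fan slowed to 80 percent speed draws about 5.1 kW,
    # roughly half the full-speed power for a 20 percent speed reduction.
    print(f"{fan_power_at_speed(10.0, 0.80):.1f} kW")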

This process is best done as part of a comprehensive facility assessment and remediation plan, to ensure that all opportunities are realized and no conditions are created that could damage IT equipment.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
