Cooling Capacity Factor (CCF) Reveals Stranded Capacity and Data Center Cost Savings

The modern data center is built around efficiency and reducing costs by running equipment more optimally. Designing a solid data center environment involves several best-practice considerations. Allocating space, flooring, power, and equipment is a tedious process that takes time and a great deal of planning. Once the environment is set up, further steps ensure that the data center runs as optimally as possible. One of those steps is making sure that heat and cooling are carefully controlled and monitored.

In this white paper, Upsite shows how the cooling capacity factor (CCF) can help a data center save money and build a more efficient platform. Data center decisions must be made around logical processes and known metrics, which is why decisions around cooling require a certain amount of knowledge of how these resources are being used.

CCF Cooling Inefficiencies

[Image Source: Upsite Technologies]

To illustrate: across 45 sites that Upsite reviewed, the average running cooling capacity was an astonishing 390% of the computer room's heat load. In some cases, running cooling capacity exceeded 3,000% of the heat load. In other instances, data centers never properly balance their cooling, which can result in an insufficient volume of conditioned air being delivered to the contained space, and in unsealed openings in the cabinets that allow conditioned air to escape and exhaust air to flow in.
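Those percentages follow from a simple ratio. This article does not restate Upsite's formula, so treat the details below as an assumption: CCF is commonly described as the total rated capacity of the running cooling units divided by roughly 110% of the IT critical load, where the extra 10% approximates non-IT heat from lights, people, and the building envelope. A minimal sketch in Python, using hypothetical unit capacities and load:

```python
def cooling_capacity_factor(running_cooling_kw, it_load_kw, overhead_factor=1.1):
    """Estimate CCF as running rated cooling capacity / (IT load * overhead).

    The 10% overhead (an assumption based on Upsite's published
    description of the metric) approximates non-IT heat sources
    such as lighting and people.
    """
    return running_cooling_kw / (it_load_kw * overhead_factor)

# Hypothetical example: four 100 kW CRAH units running against a 115 kW IT load
ccf = cooling_capacity_factor(running_cooling_kw=4 * 100, it_load_kw=115)
print(f"CCF = {ccf:.2f} ({ccf:.0%} of estimated heat load)")
```

A CCF near 1.0 (plus whatever redundancy the design calls for) suggests a well-matched cooling plant; ratios like the 390% average the paper cites indicate stranded capacity that airflow management could release.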

By understanding resource utilization, administrators can build a better data center. There are several benefits to knowing your CCF and using it to right-size your cooling infrastructure. In Upsite's white paper, we learn about several of these benefits, including:

  • The computer room environment improves.
  • Hot and cold spots are eliminated.
  • The throughput and reliability of IT equipment increase.
  • Operating costs are reduced through improved cooling effectiveness and efficiency.
  • Released stranded capacity increases room cooling capacity while deferring capital expenditure on additional cooling infrastructure, enabling business growth through additional IT projects or other investments.
  • The supported IT load increases through improved utilization of airflow.
  • The carbon footprint shrinks through reduced utility usage.
  • Capital expenditure is deferred by increasing the utilization of existing infrastructure.

Download Upsite's white paper to learn how to create a more efficient data center cooling environment. If you are struggling with existing cooling problems, the paper also describes key remediation steps for regaining control of the infrastructure. As more organizations turn to data center providers for their hosting needs, there will be greater demand for efficient data center infrastructure. To optimize and control costs, organizations should remember that data center environmental conditions are important efficiency metrics.


About the Author

Bill Kleyman is a veteran, enthusiastic technologist with experience in data center design, management and deployment. His architecture work includes virtualization and cloud deployments as well as business network design and implementation. Currently, Bill works as the Vice President of Strategy and Innovation at MTM Technologies, a Stamford, CT based consulting firm.

Comments


  1. How exactly are hot and cold spots eliminated? Is that info in the white paper?

  2. Björn Schödwell

    This is merely old wine in new bottles and obstructs global efforts to harmonize data center metrics. LBNL proposed the Cooling System Sizing Factor a long time ago: