Core4 Launches Cooling Product Line

A view of Core4 products in use at Sonic.net in Santa Rosa, Calif.

Data center cooling is a tough market for a start-up to crack, dominated by large industrial companies with long track records. But Core4 Systems is entering the market this week with a line of computer room air conditioners (CRACs), air handlers and chiller systems.

Core4 is emphasizing energy efficiency and positioning its systems as replacements for existing equipment in retrofits that expand data center cooling capacity. The company says its systems can help data center operators increase the density of equipment in their racks, which allows companies to expand their computing capacity without building additional space.

Core4 is based in Napa, Calif., and has financial backing from an angel investor, according to VP of business development Jamien McCullum. Company president David Nurse was previously an executive with Ingersoll Rand, while chief technical officer Rick Cockrell was an energy design specialist with Bell Products Inc.

Core4 has deployed its equipment at one existing data center. Sonic.net, an ISP and colocation provider in Santa Rosa, Calif., says the savings realized by the Core4 systems led PG&E to award Sonic.net an energy rebate of $129,000.

Core4’s primary differentiator is the use of “refrigerant-side economization,” which the company says can dramatically reduce the energy used by refrigeration compressors. Core4 is seeking a patent on its reduced-compression system, which uses floating-head pressure controls to let head pressure vary with outdoor conditions, taking advantage of low outdoor air temperatures to reduce the compressor’s workload.
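
The article doesn’t detail the control logic, so the following is a minimal sketch of the general floating-head-pressure technique, not Core4’s implementation; the setpoint, approach and minimum-condensing values are assumptions chosen for illustration.

```python
# Illustrative sketch of floating-head-pressure control, NOT Core4's actual
# algorithm. A fixed-head system holds the condensing temperature setpoint
# constant; a floating-head system lets it track outdoor ambient, down to a
# minimum the refrigeration circuit can tolerate. All numbers are assumed.

FIXED_SETPOINT_F = 105.0   # typical fixed condensing-temperature setpoint
MIN_CONDENSING_F = 70.0    # assumed minimum safe condensing temperature
APPROACH_F = 10.0          # assumed condenser approach above ambient

def condensing_setpoint(outdoor_air_f: float) -> float:
    """Condensing-temperature target under floating-head control."""
    return max(outdoor_air_f + APPROACH_F, MIN_CONDENSING_F)

# Compressor lift (and thus work) scales roughly with the gap between
# condensing and evaporating temperatures, so a lower setpoint on cool
# days means less compressor work.
for ambient_f in (95, 75, 55, 35):
    print(f"ambient {ambient_f}F: fixed setpoint {FIXED_SETPOINT_F}F, "
          f"floating setpoint {condensing_setpoint(ambient_f)}F")
```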

Core4’s air handler units include a “Scavenger Coil” in the air stream ahead of the main coils. Whenever the condensing temperature is below the return air temperature, the Scavenger Coil’s output is piped directly back to the condenser. Core4 also says its products can reduce on-site water usage by 28 percent.
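
The article gives only the switching condition, so here is a minimal sketch of that routing decision; the names and data structure are hypothetical.

```python
# Minimal sketch of the scavenger-coil routing rule as described in the
# article: route the coil's output directly back to the condenser whenever
# the condensing temperature is below the return air temperature. The
# names and the data structure here are hypothetical.

from dataclasses import dataclass

@dataclass
class CoilState:
    condensing_temp_f: float   # refrigerant condensing temperature
    return_air_temp_f: float   # temperature of air returning from the room

def route_scavenger_output(state: CoilState) -> str:
    """Decide where the scavenger coil's refrigerant output goes."""
    if state.condensing_temp_f < state.return_air_temp_f:
        # Cool conditions: heat can be rejected economically, so send the
        # coil's output straight back to the condenser.
        return "condenser"
    # Otherwise fall back to the normal refrigeration circuit.
    return "normal_circuit"

print(route_scavenger_output(CoilState(60.0, 75.0)))  # -> condenser
print(route_scavenger_output(CoilState(95.0, 75.0)))  # -> normal_circuit
```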

Some of Core4’s energy savings claims are based on comparisons with air-side economizers, and those comparisons assume the air economizers cause a higher rate of equipment failures due to the environmental conditions they introduce into the data center.

A detailed overview of the Core4 system’s operation and a comparison with systems previously used at Sonic.net is available from the Core4 web site.

Sonic.net spent $618,000 installing Core4 systems to replace three 30-ton CRAC systems from another vendor. The systems have been in place for 18 months, with an estimated annual energy savings of $129,000. Core4 projects that those savings, when combined with the PG&E rebate, result in a project payback time of 3.5 years.
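
As a rough check on that payback figure: if the PG&E rebate is treated as an offset against the upfront cost and the savings run at the $11,600 per month Sonic.net cites below, the arithmetic lands at about 3.5 years. (The accounting treatment is an assumption; the figures are from the article.)

```python
# Back-of-the-envelope payback check using the figures in the article.
# Assumption: the PG&E rebate offsets the upfront cost, and monthly
# savings are the $11,600 Sonic.net reports.

install_cost = 618_000      # cost of installing the Core4 systems
pge_rebate = 129_000        # PG&E energy rebate
monthly_savings = 11_600    # Sonic.net's reported monthly savings

net_cost = install_cost - pge_rebate        # 489,000
annual_savings = 12 * monthly_savings       # 139,200
payback_years = net_cost / annual_savings
print(f"payback: {payback_years:.1f} years")  # ~3.5 years
```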

“By replacing our incumbent Liebert CRACs with Core4 systems, we’ve cut our cooling-related energy consumption by an astounding 72 percent, are saving more than $11,600 per month, and have cut our power usage effectiveness (PUE) rating from 1.82 to 1.25 without sacrificing our temperature or humidity control,” said Dane Jasper, CEO of Sonic.net.
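
For context, The Green Grid defines PUE as total facility energy divided by the energy used by IT equipment, so the overhead per watt of IT load is (PUE − 1). A quick check, under the simplifying assumption that cooling accounts for essentially all of that overhead, shows the two figures Jasper cites are roughly consistent:

```python
# PUE = total facility energy / IT equipment energy, so overhead per watt
# of IT load is (PUE - 1). Simplifying assumption: treat all overhead as
# cooling, which overstates cooling slightly (lighting, UPS losses, etc.
# are also overhead).

pue_before, pue_after = 1.82, 1.25
overhead_before = pue_before - 1   # 0.82 W of overhead per IT watt
overhead_after = pue_after - 1     # 0.25 W of overhead per IT watt

reduction = (overhead_before - overhead_after) / overhead_before
print(f"overhead reduction: {reduction:.0%}")  # ~70%, near the claimed 72%
```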

“Despite the fact that fast-growing data centers are emerging as one of the biggest energy consumers in any industry, relatively little attention has been paid to the impact cooling solutions have on annual energy spending,” said Nurse. “With today’s announcement, we are breaking down the barriers to understanding problems associated with data center cooling while introducing an elegant solution that delivers a radical improvement over legacy approaches.”

About the Author

Rich Miller is the founder and editor at large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.

One Comment

  1. I refer readers to the work of The Green Grid on PUE, the initiative to standardize the calculation of the ratio between the energy used by the compute devices and the other energy consumed by the facility, the largest share being cooling. A computing device is, from a heat perspective, a variable heater. As a general guide, a server uses 50% of its energy doing no compute work (fans, memory, etc.) and the other 50% in direct proportion to CPU utilization. You do not need to measure the temperature in a data center to know how much additional cooling is required: the watts being delivered from the power supply system equal the heat being created, which equals the heat to be removed. I write this to encourage data center managers looking at ways to save energy to both look at their cooling systems and to look at some of the open source software initiatives for measuring and graphing energy use. open4energy ( http://open4energy.com ) is an open source energy monitoring project built on Cacti and RRDtool, both proven platforms for graphing time series data.
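
The commenter’s rule of thumb translates directly into a quick estimate. A minimal sketch, assuming the 50/50 baseline-versus-CPU-proportional split described above (real servers vary):

```python
# Quick heat-load estimate from the commenter's rule of thumb: a server's
# power draw is ~50% fixed baseline (fans, memory, etc.) plus ~50% scaled
# by CPU utilization, and every watt drawn becomes heat to be removed.
# The split and the example numbers are assumptions, not measurements.

def server_heat_watts(max_power_w: float, cpu_utilization: float) -> float:
    """Estimated heat output (watts) for one server."""
    baseline = 0.5 * max_power_w
    variable = 0.5 * max_power_w * cpu_utilization
    return baseline + variable  # watts in = heat out

# Example: 40 servers rated at 400 W each, averaging 30% CPU utilization.
total_heat_w = sum(server_heat_watts(400, 0.30) for _ in range(40))
print(f"heat to remove: {total_heat_w:.0f} W")  # 10400 W, ~10.4 kW
```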