Cooling Trends: Cost-Cutting Opportunities
The latest technology advances and best practices are changing the cooling practices and approaches in the data center, writes Jeff Klaus of Intel Corporation.
July 8, 2014
Jeff Klaus is the General Manager of Data Center Manager (DCM) Solutions at Intel Corporation. He can be reached at [email protected].
In the future, when you open the door to the data center, will you still hear that loud hum of the air handlers? Will the temperature drop as you step over the threshold? Some IT and facilities managers accept as inevitable that data centers have to operate between 64 and 68°F and that cooling systems have to scale in direct proportion to server and storage expansion. Fortunately, they are wrong. The latest technology advances and best practices are changing the cooling practices and approaches in the data center.
Smarter hardware and middleware
Data center equipment providers have responded to spiraling energy costs by building more intelligence for thermal and power monitoring into their equipment. Step one in reining in cooling costs: start monitoring these smart devices. Real-time temperature readings can point to hot spots where cooling needs to be adjusted, and snapshots taken during low and peak periods of activity can help data center managers gauge requirements for planning purposes.
More important, the fine-grained information makes it possible to track cooling efficiency over time. Middleware that automates the collection and logging of temperature and power consumption data allows patterns to be extracted and analyzed. The same middleware can also drive intuitive dashboards that display the data as thermal and power maps.
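To make that concrete, the sketch below shows what automated collection can look like: a small Python loop that polls each server's management controller over a Redfish-style HTTP interface and appends timestamped temperature and power readings to a CSV log for later trend analysis. The host addresses, credentials, and endpoint paths are illustrative assumptions rather than any particular vendor's API.

```python
"""
Minimal polling sketch (illustrative only): collect per-server inlet
temperature and power readings over a Redfish-style HTTP interface and
append them to a CSV log. Hosts, credentials, endpoint paths, and field
names are assumptions for the example, not a specific product's API.
"""
import csv
import time
from datetime import datetime, timezone

import requests

SERVERS = ["10.0.0.11", "10.0.0.12"]   # hypothetical BMC addresses
AUTH = ("monitor", "secret")            # hypothetical read-only credentials
LOG_FILE = "thermal_power_log.csv"
POLL_INTERVAL_S = 60

def read_telemetry(host):
    """Fetch one inlet-temperature and one power reading from a server's BMC."""
    thermal = requests.get(f"https://{host}/redfish/v1/Chassis/1/Thermal",
                           auth=AUTH, verify=False, timeout=10).json()
    power = requests.get(f"https://{host}/redfish/v1/Chassis/1/Power",
                         auth=AUTH, verify=False, timeout=10).json()
    inlet_c = thermal["Temperatures"][0]["ReadingCelsius"]
    watts = power["PowerControl"][0]["PowerConsumedWatts"]
    return inlet_c, watts

def poll_once(writer):
    """Append one timestamped row per server: time, host, inlet temp, power."""
    now = datetime.now(timezone.utc).isoformat()
    for host in SERVERS:
        try:
            inlet_c, watts = read_telemetry(host)
            writer.writerow([now, host, inlet_c, watts])
        except requests.RequestException as exc:
            print(f"{host}: poll failed ({exc})")

if __name__ == "__main__":
    with open(LOG_FILE, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            poll_once(writer)
            f.flush()
            time.sleep(POLL_INTERVAL_S)
```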
Closed-loop control
Some of the same technology that can gather real-time temperature and power data can be used to adjust equipment on the fly and lower the demand for cooling. The world’s largest data centers employ power capping and dynamic workload management to respond to fluctuating demand while keeping energy – and therefore temperature – within pre-defined thresholds.
While tracking real-time conditions, data center managers can also employ energy management solutions that let them control server performance levels. Slower clock speeds reduce power draw and dissipated heat. When balanced against user requirements, subtle tuning of server speeds has been shown to significantly lower energy consumption and cooling requirements without affecting the user experience.
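As a rough illustration of how such a closed loop might be wired together, the sketch below decides a per-server power cap from the most recent measured draw: tighten when the node exceeds its budget, relax when it is comfortably under. Reading the draw and applying the cap are left to whatever interface the platform exposes (Redfish power limits, vendor tooling, and so on), and the budget, headroom, and step values are assumptions chosen for the example.

```python
"""
Closed-loop power capping sketch (illustrative). The control step is pure
logic: given the current cap and the measured draw, decide the next cap.
Budget, headroom, and step sizes are assumptions for the example.
"""
POWER_BUDGET_W = 350   # per-server budget to stay under (assumed)
HEADROOM_W = 25        # relax the cap only when draw is this far under budget
CAP_STEP_W = 10        # tighten/relax by this much each control cycle
MIN_CAP_W, MAX_CAP_W = 200, 450

def next_cap(current_cap_w: int, measured_draw_w: float) -> int:
    """Return the power cap to apply for the next control interval."""
    if measured_draw_w > POWER_BUDGET_W:
        # Over budget: tighten the cap so firmware throttles clocks to comply.
        return max(MIN_CAP_W, current_cap_w - CAP_STEP_W)
    if measured_draw_w < POWER_BUDGET_W - HEADROOM_W:
        # Comfortably under budget: relax the cap to restore performance.
        return min(MAX_CAP_W, current_cap_w + CAP_STEP_W)
    return current_cap_w

if __name__ == "__main__":
    # Walk the cap through a few simulated readings to show the behavior.
    cap = MAX_CAP_W
    for draw in (420, 380, 360, 300, 290):
        cap = next_cap(cap, draw)
        print(f"measured {draw} W -> apply cap {cap} W")
```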
The combination of monitoring, power capping, and dynamic performance tuning enables many cost-cutting practices. Identifying and minimizing the number of idle servers, for example, can reduce energy and cooling requirements by 10 to 15 percent in typical data centers. In general, the new approaches help data center managers avoid overprovisioning both compute power and the related cooling systems.
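Finding those idle servers can start from the same telemetry log. The sketch below assumes the CSV format from the earlier monitoring example and flags servers whose power draw never rises meaningfully above an assumed idle baseline; production tooling would also weigh CPU and I/O utilization before consolidating anything.

```python
"""
Idle-server identification sketch (illustrative): scan the CSV log written
by the monitoring sketch above and flag servers whose power draw never
rises meaningfully above an assumed idle baseline. The baseline and margin
are assumptions for this server model, not measured values.
"""
import csv
from collections import defaultdict

LOG_FILE = "thermal_power_log.csv"
IDLE_BASELINE_W = 120    # assumed idle draw for this server model
ACTIVE_MARGIN_W = 30     # readings above baseline + margin count as "active"

def find_idle_candidates(path: str = LOG_FILE) -> list[str]:
    """Return hosts whose peak logged draw stayed near the idle baseline."""
    peak = defaultdict(float)
    with open(path, newline="") as f:
        for _ts, host, _inlet_c, watts in csv.reader(f):
            peak[host] = max(peak[host], float(watts))
    return [h for h, w in peak.items() if w <= IDLE_BASELINE_W + ACTIVE_MARGIN_W]

if __name__ == "__main__":
    for host in find_idle_candidates():
        print(f"{host}: peak draw stayed near idle; candidate for consolidation")
```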
Redefining “normal” operation
Armed with accurate data center energy and cooling data and closed-loop management, IT and facilities teams have been turning up the thermostats in data centers. Vendors have responded by confirming reliable operation at higher temperatures.
Bottom line: for every degree the ambient temperature is raised, cooling costs drop by roughly 4 percent. Small temperature changes can yield major savings, and with monitoring in place, data center managers can experiment easily while minimizing risks to equipment life and, therefore, service continuity.
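As a back-of-the-envelope illustration of that rule of thumb, the short calculation below applies the ~4 percent-per-degree figure to a hypothetical $500,000 annual cooling bill. Both numbers are assumptions chosen for the example, not measured results.

```python
# Back-of-the-envelope arithmetic for the ~4 percent-per-degree rule of thumb.
# The $500,000 annual cooling bill and the 4 percent figure are illustrative
# assumptions, not measured data.
ANNUAL_COOLING_COST = 500_000   # dollars per year (assumed)
SAVINGS_PER_DEGREE = 0.04       # ~4 percent per degree F raised

for degrees_raised in (1, 3, 5):
    # Treat each additional degree's savings as compounding on the reduced bill.
    remaining = ANNUAL_COOLING_COST * (1 - SAVINGS_PER_DEGREE) ** degrees_raised
    saved = ANNUAL_COOLING_COST - remaining
    print(f"Raise setpoint {degrees_raised}F: save ~${saved:,.0f} per year")
```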
The cost of cost-cutting
How practical is real-time monitoring, the key enabler for the cost savings outlined above? Because the granular energy and temperature data can be collected programmatically, the solutions that unlock these efficiency improvements do not require expensive overlay networks of monitoring hardware. In addition, the best-in-class middleware is hardware agnostic, which is driving energy management solutions and approaches that can be applied across hardware from Dell, HP, IBM, and Intel, among others.
The business case for a dashboard that puts IT in control of power and cooling costs rests on short payback periods, and the middleware simplifies both deployment and ongoing support for these solutions. With no major obstacles to adoption, IT and facilities teams should treat reducing cooling costs as an achievable short-term goal with very long-term benefits.