Measurement and control are key to optimization, and that of course applies to data center optimization too. That’s what DCIM software provides, tools for measurement and control.
While most data center managers agree that Data Center Infrastructure Management (DCIM) tools can help optimize data center operations, purchasing DCIM software often falls to the bottom of the budget priority list. Fortunately, with some time and effort, there are ways to implement DCIM practices without the expense of off-the-shelf software. Here we’ll explore the strategies organizations can use to optimize data center operations on a shoestring budget.
In the first part of this two-part series, we’ll focus on the measuring aspect of DCIM. Specifically, we’ll discuss methods for collecting asset intelligence, benchmarking efficiency, and leveraging existing monitoring. In the second part, we’ll explore control and optimization.
For guidance on DCIM software, visit DCK’s DCIM InfoCenter
Collecting Asset Intelligence
A first step in implementing any DCIM program is to get a firm understanding of data center assets by undertaking a thorough inventory process. Although this step is time-consuming and tedious, the benefits of collecting asset data will be realized immediately. The key is to collect the right data and record it in a user-friendly way. The best practice is to create a spreadsheet to ensure you are collecting the right asset data, and to classify it much the way DCIM software would.
Begin by creating a spreadsheet with five tabs labeled: Locations, Cabinets, Freestanding Equipment, Rack-Mounted Equipment, and Chassis-Mounted Equipment.
• The Locations tab should have at least seven columns with the following headings: Country, State, County, City, Building, Floor, and Room. Although you may only focus on one data center, this location information ensures that if any data centers are added in the future, each will be uniquely identified.
• Within the Cabinets tab, create column headings that include Room Name, Cabinet Name, Asset Tag, Make, Model, Generation, and Grid Location. Room Name should match one of the names listed in the Locations tab. If your data center doesn’t use asset tags on cabinets, or doesn’t have a raised floor or grid system, leave these fields blank when performing the inventory.
• The Freestanding Equipment tab should also include Room Name, along with Name (name of the equipment), Serial Number, Asset Tag, Asset Type (such as server, storage, or network), Make, Model, Generation, and Grid Location.
• The Rack-Mounted Equipment tab will be used to identify all devices mounted in server cabinets. The column headings to include are: Name, Serial Number, Asset Tag, Asset Type (such as server, storage, chassis, network, or power strip), Make, Model, Generation, Grid Location, Room Name, Cabinet, and U. If equipment is mounted vertically, such as power strips, record the U location as 0.
• The Chassis-Mounted Equipment tab will identify all blades within chassis. The column headings will be the same as Rack-Mounted Equipment, with a few exceptions: a column called Chassis Name should be added, and instead of a U position, the location of each blade should be identified by its slot within the chassis.
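The five-tab structure above can also be scaffolded programmatically. A minimal sketch using Python’s standard csv module, writing one header-only CSV file per tab (multi-tab workbooks would require a third-party library such as openpyxl; the file-naming scheme here is an assumption):

```python
import csv
from pathlib import Path

# Column headings for each "tab," mirroring the structure described above.
TABS = {
    "Locations": ["Country", "State", "County", "City", "Building", "Floor", "Room"],
    "Cabinets": ["Room Name", "Cabinet Name", "Asset Tag", "Make", "Model",
                 "Generation", "Grid Location"],
    "Freestanding Equipment": ["Room Name", "Name", "Serial Number", "Asset Tag",
                               "Asset Type", "Make", "Model", "Generation",
                               "Grid Location"],
    "Rack-Mounted Equipment": ["Name", "Serial Number", "Asset Tag", "Asset Type",
                               "Make", "Model", "Generation", "Grid Location",
                               "Room Name", "Cabinet", "U"],
    "Chassis-Mounted Equipment": ["Name", "Serial Number", "Asset Tag", "Asset Type",
                                  "Make", "Model", "Generation", "Grid Location",
                                  "Room Name", "Cabinet", "Chassis Name", "Slot"],
}

def create_inventory_files(directory):
    """Write one header-only CSV per tab and return the file paths."""
    paths = []
    for tab, headings in TABS.items():
        path = Path(directory) / (tab.replace(" ", "_") + ".csv")
        with open(path, "w", newline="") as f:
            csv.writer(f).writerow(headings)
        paths.append(str(path))
    return paths
```

Starting from consistent headers like these makes it much easier to migrate the data into real DCIM software later.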
As mentioned, collecting equipment data without DCIM software is grueling, but this data can be used to produce some very powerful information and reports. For example, armed with complete and accurate make, model, and generation data, companies can identify older generations of equipment that should be slated for tech refresh. Following are some additional ways this data can be leveraged.
Power consumption per server cabinet can be estimated based on equipment make and model. Most IT equipment manufacturers include estimated average power draw within the “tech specs” on their websites. If only maximum power draw is listed, a fairly accurate average can be estimated by multiplying the maximum by 66 percent, or 0.66. Once all device power is added up within each cabinet, imbalances in power density are revealed. This information is invaluable for identifying over-subscribed power circuits and possible cooling issues such as hot spots.
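The estimation approach above, using the published average draw where available and falling back to 66 percent of nameplate maximum, can be sketched as follows (the inventory record fields are assumptions matching the spreadsheet columns):

```python
def estimated_avg_watts(max_watts, avg_watts=None):
    """Use the manufacturer's published average draw if available;
    otherwise estimate it as 66 percent of the listed maximum."""
    return avg_watts if avg_watts is not None else max_watts * 0.66

def cabinet_power(inventory):
    """Sum estimated per-device draw for each cabinet to reveal
    imbalances in power density across the room."""
    totals = {}
    for device in inventory:
        watts = estimated_avg_watts(device["max_watts"], device.get("avg_watts"))
        totals[device["cabinet"]] = totals.get(device["cabinet"], 0.0) + watts
    return totals
```

Comparing each cabinet’s total against its circuit rating then flags over-subscribed circuits and likely hot spots.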
Although often overlooked, weight capacities are a critical part of data center capacity planning. Similar to power consumption, equipment weight can be found within tech specs. Adding up equipment weight along with the server cabinet weight can ensure total weight remains within floor load thresholds.
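The weight check described above is a simple sum against a threshold; a minimal sketch (the floor load limit is an assumption you must confirm with your facility’s structural documentation):

```python
def cabinet_weight_ok(equipment_lbs, cabinet_lbs, floor_limit_lbs):
    """Total a cabinet's loaded weight and check it against the floor
    load threshold for its footprint.

    equipment_lbs: per-device weights from the tech specs;
    cabinet_lbs:   empty server cabinet weight;
    floor_limit_lbs: rated floor load for the cabinet's footprint.
    Returns (total_weight, within_limit)."""
    total = cabinet_lbs + sum(equipment_lbs)
    return total, total <= floor_limit_lbs
```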
Reports showing open ports vs. used ports can be produced if patch panels are included while collecting inventory. The lead time between purchase and installation can be very long for new patch panels and trunk cabling. Having this port information provides the necessary warnings so that additional ports can be added well in advance and new equipment can be racked and cabled without delay.
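A port-utilization report along these lines is straightforward to generate from the inventory; a sketch (the 80 percent warning threshold is an assumption to tune for your lead times):

```python
def port_report(patch_panels, warn_threshold=0.8):
    """Summarize used vs. open ports per patch panel and flag panels
    whose utilization warrants ordering more capacity now.

    patch_panels: list of dicts with 'name', 'total_ports',
    'used_ports', as recorded during inventory."""
    report = []
    for p in patch_panels:
        utilization = p["used_ports"] / p["total_ports"]
        report.append({
            "name": p["name"],
            "open": p["total_ports"] - p["used_ports"],
            "needs_expansion": utilization >= warn_threshold,
        })
    return report
```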
Equipment data can also be used to generate front views of server cabinets and racks instead of DCIM software visualizations. These views are a great way to see how full cabinets are. Within spreadsheets, cells can be stretched horizontally and reduced vertically to represent each server cabinet or rack U space. Cells can be filled in with colors and text to represent equipment occupying U spaces. Also, images of the equipment can be added to the cells for a more realistic representation.
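The same U-by-U technique can be rendered as plain text instead of spreadsheet cells; a minimal sketch, assuming devices are recorded with a bottom U position and a height in U:

```python
def elevation_view(cabinet_u, devices):
    """Render a simple front view of a cabinet, top U first.

    devices: list of (name, bottom_u, height_u) tuples, mirroring the
    spreadsheet approach of filling one cell per U space."""
    slots = {u: "(empty)" for u in range(1, cabinet_u + 1)}
    for name, bottom_u, height_u in devices:
        for u in range(bottom_u, bottom_u + height_u):
            slots[u] = name
    # Data centers number U positions from the bottom, so print top-down.
    return "\n".join(f"U{u:02d} | {slots[u]}" for u in range(cabinet_u, 0, -1))
```

A quick glance at the output shows how full each cabinet is and where contiguous open U spaces remain.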
Benchmarking Data Center Efficiency
Power usage is a window into your data center. Understanding usage patterns as well as identifying fluctuations can tell you a lot about the efficiency of your operations. By benchmarking and monitoring power you can identify areas of improvement as well as predict and prevent issues from occurring.
The first step is to benchmark. If monitoring is connected to all power supplied to the data center, including what is needed for lights and cooling, reports showing data center efficiency can be created. Record the total power supplied to the data center, covering its lighting, cooling, IT equipment, and so on. Then record the power supplied exclusively to IT equipment. Several energy efficiency metrics can be calculated from these two numbers, such as PUE (Power Usage Effectiveness), or the newer Mechanical Load Component (MLC) and Electrical Loss Component (ELC). Whichever metric you choose, consistency matters most: calculating it the same way over time shows whether the steps managers are taking are actually improving the data center.
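PUE, the most widely used of these metrics, is simply total facility power divided by IT equipment power; a minimal sketch:

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power (IT load plus
    lighting, cooling, and electrical losses) divided by IT load.
    1.0 is the theoretical ideal; real facilities are always higher."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw
```

Recording the result on a regular schedule, using the same measurement points each time, produces the trend line that matters.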
If power monitoring is non-existent or sporadic, there may be another way to show cost savings. Power utility bills are sent with kWh usage for each feed into a building. Ideally, the data center has at least two dedicated utility feeds, separate from the office-space power. Power usage will naturally fluctuate due to a variety of factors, such as outdoor weather, but kWh fluctuations in the data center can also be due to changes in IT load. As changes such as large decommissions or efficiency improvements are implemented, the effect may be shown on the following month’s utility bill. Annual cost savings can then be estimated by multiplying this drop in kWh by the utility rate.
For an example of how power monitoring information can improve operations, use the information to project cost savings. Suppose you’re considering swapping out old servers for new ones in order to cut energy costs. A way to prove gains before committing to a large-scale server refresh is to measure and record the power draw on just one of the older servers. Then, replace this old server with a new one and compare the old server’s power draw to the new server’s. The power difference can then be multiplied by the per-kWh utility rate to get an accurate daily, monthly or annual savings per server. Total savings can then be accurately projected for the project by multiplying that number by the number of servers being considered for replacement.
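The projection above reduces to a short calculation (this sketch assumes servers run around the clock; scale the hours down if yours do not):

```python
def annual_refresh_savings(old_watts, new_watts, rate_per_kwh, server_count):
    """Project annual savings from a server refresh, following the
    measure-one-server approach: measure one old and one new server,
    then scale the per-server difference across the fleet."""
    delta_kw = (old_watts - new_watts) / 1000.0
    annual_kwh_per_server = delta_kw * 24 * 365  # assumes 24x7 operation
    return annual_kwh_per_server * rate_per_kwh * server_count
```

For instance, a 150 W per-server reduction at $0.10/kWh across 100 servers works out to roughly $13,000 per year.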
Leveraging Existing Monitoring
Beyond power consumption monitoring, data centers often have a mixture of monitoring points waiting to be put to use. UPSs, CRAC units, PDUs, and power strips typically have unit controls that provide sensor data, alarms, and real-time monitoring data. DCIM software usually collects this data, but it can also be accessed simply by scrolling through the multiple pages of data shown on the equipment’s display screens. Usually, this data is also accessible through the network and internet via device IP addresses. If, for example, this data is available for PDUs, exact power loads on circuits feeding server cabinets and freestanding equipment would be known. Exact power loads would also be known for IT equipment plugged into “smart” power strips that have either metered or switched outlets.
With some computer programming skills, relatively straightforward programs can be written to collect this monitoring point data and display it in a user-friendly format. This monitoring data can be refreshed frequently to provide up-to-the-minute readouts from several pieces of equipment. Reports and graphs showing historical trends, along with the impact to capacity as equipment gets added or removed, can be generated.
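A minimal sketch of such a collector, with the device-fetch step passed in as a function so the history and trend logic stands alone (in a real deployment that function would query each device’s IP address, for example via SNMP or a vendor HTTP interface; the names here are illustrative assumptions):

```python
import time
from collections import defaultdict

class MonitoringCollector:
    """Poll a set of monitoring points and keep a history for trend
    reports. fetch_fn(device) returns the current reading, e.g. kW on
    a PDU circuit or per-outlet draw from a smart power strip."""

    def __init__(self, fetch_fn):
        self.fetch_fn = fetch_fn
        self.history = defaultdict(list)  # device -> [(timestamp, value)]

    def poll(self, devices):
        """Record one timestamped reading per device."""
        now = time.time()
        for device in devices:
            self.history[device].append((now, self.fetch_fn(device)))

    def latest(self, device):
        return self.history[device][-1][1]

    def trend(self, device):
        """Change between the first and most recent reading."""
        readings = self.history[device]
        return readings[-1][1] - readings[0][1]
```

Running `poll` on a schedule and graphing each device’s history gives the up-to-the-minute readouts and capacity-impact trends described above.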
These monitoring points can be used to improve operations in multiple ways, such as finding hot spots. Smart power strips usually include environmental monitoring ports. External temperature sensors can be added and placed in cold aisles. With sensors placed in multiple racks, thorough and accurate temperature readings can be provided, which can enable data center managers to rebalance heat loads by relocating servers from cabinets that are too hot to cabinets that are underutilized. These power strip temp sensors can also save significant money if their readings show cabinets are over-cooled. If cabinets are in fact too cold, turning up the data center temperature can equate to huge cost savings by reducing energy needed to cool the room.
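Turning those sensor readings into a rebalancing worklist is a simple classification; a sketch, using the commonly cited ASHRAE recommended intake range of roughly 64.4–80.6 °F as default thresholds (confirm the right envelope for your equipment before acting on it):

```python
def classify_cabinets(intake_temps_f, low_f=64.4, high_f=80.6):
    """Split cabinets into hot-spot candidates and over-cooled
    candidates based on cold-aisle intake temperatures.

    intake_temps_f: dict mapping cabinet name to temperature (F).
    Returns (hot, overcooled) lists of cabinet names."""
    hot = [c for c, t in intake_temps_f.items() if t > high_f]
    overcooled = [c for c, t in intake_temps_f.items() if t < low_f]
    return hot, overcooled
```

Hot cabinets are candidates for relocating load; a long over-cooled list is the evidence needed to justify raising the room set point.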
Purchasing DCIM software may not be in the budget. However, significant steps can be taken to optimize data center operations without capital expense. The first steps are obtaining a clear understanding of assets through spreadsheets, and benchmarking efficiency by measuring power through existing monitoring and/or equipment power estimates. With that data in hand, you can undertake strategies to control and optimize operations.
About the Author: Tim Kittila is Director of Data Center Strategy at Parallel Technologies. In this role, Kittila oversees the company’s data center consulting and services to help companies with their data center, whether it is a privately-owned data center, colocation facility or a combination of the two. Earlier in his career at Parallel Technologies Kittila served as Director of Data Center Infrastructure Strategy and was responsible for data center design/build solutions and led the mechanical and electrical data center practice, including engineering assessments, design-build, construction project management and environmental monitoring. Before joining Parallel Technologies in 2010, he was vice president at Hypertect, a data center infrastructure company. Kittila earned his bachelor of science in mechanical engineering from Virginia Tech and holds a master’s degree in business from the University of Delaware’s Lerner School of Business.