
Sixth Key to Brokering IT Services Internally: Prove What You Delivered

Dick Benton of GlassHouse Technologies explains how to prove what you delivered, because without metrics, monitoring and reporting that demonstrate you’ve fulfilled Service Level Agreements (SLAs), your service consumers and your management won’t know that you’ve met your commitments. This is the sixth of seven key tips to becoming more IT service-oriented.

Dick Benton, a principal consultant for GlassHouse Technologies, has worked with numerous Fortune 1000 clients in a wide range of industries to develop and execute business-aligned strategies for technology governance, cloud computing and disaster recovery.


In our last post, I outlined the fifth of seven key tips IT departments should follow if they want to begin building a better service strategy for their internal users: building the order process. That means developing an automated method for provisioning services via a Web console that can satisfy today’s on-demand consumers.

Measuring and Communicating Your Outcomes

This post covers the sixth tip: proving what you delivered. Without metrics, monitoring and reporting that demonstrate you’ve fulfilled your Service Level Agreements (SLAs), neither your service consumers nor your management will know that you’ve met your commitments.

Service offerings and the subsequent signed SLAs will typically contain two types of service delivery metrics. The first group covers quality of service and may include performance (e.g., IOPS), availability (scheduled hours) and reliability (number of nines). The second group covers protection attributes, including operational recovery point and recovery time objectives (RPOs and RTOs) as well as their disaster recovery counterparts. Some organizations also include a historical recovery horizon and retrieval time as service attributes. Service offerings also typically include some level of compliance or security protection, and most importantly, they should include the cost of the deployable resource unit of the service offering.
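To make the two groups concrete, here is a minimal sketch of how a service offering and its metrics might be modeled. The class and field names, and the sample values, are illustrative assumptions, not drawn from any particular catalog:

```python
from dataclasses import dataclass

@dataclass
class QualityOfService:
    iops_target: int              # performance, e.g. sustained IOPS
    scheduled_hours: str          # availability window, e.g. "24x7"
    availability_nines: float     # reliability, e.g. 99.9

@dataclass
class Protection:
    operational_rpo_minutes: int  # operational recovery point objective
    operational_rto_minutes: int  # operational recovery time objective
    dr_rpo_hours: int             # disaster recovery point objective
    dr_rto_hours: int             # disaster recovery time objective
    recovery_horizon_days: int    # how far back history can be restored
    retrieval_time_hours: int     # how long a historical restore takes

@dataclass
class ServiceOffering:
    name: str
    qos: QualityOfService
    protection: Protection
    compliance_level: str         # e.g. "PCI", "HIPAA", "none"
    unit_cost_per_month: float    # cost of one deployable resource unit

# A hypothetical tier, with every value invented for illustration.
gold = ServiceOffering(
    name="Gold Block Storage",
    qos=QualityOfService(iops_target=5000, scheduled_hours="24x7",
                         availability_nines=99.9),
    protection=Protection(15, 60, 4, 24, 365, 8),
    compliance_level="PCI",
    unit_cost_per_month=120.0,
)
```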

Determining KPIs

It is very important that the process of establishing service offering metrics include the very people who must execute to the key performance indicators (KPIs) around each service. The operations staff must strongly believe in its ability to deliver to the target metrics. This is not the time to set stretch goals; in fact, nothing is more detrimental to consumer satisfaction (and IT morale) than IT failing to meet a published goal. Initial metrics must be absolutely achievable, and operations people must believe they have an excellent chance of meeting those targets. Once operations have settled in and the bumps have been worked out, tracking actuals against upper and lower thresholds can start to drive improvements and a better service level for the next service catalog publication, as in the sketch below. This means IT is now visibly improving its service levels, and with them, consumer satisfaction.
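As a rough illustration of that ratchet, the following sketch proposes a published target for the next catalog publication from tracked actuals. The rule (move halfway toward the observed median, capped at the sustainable upper threshold) and all names are hypothetical:

```python
import statistics

def propose_next_target(actuals, lower, upper, current_target):
    """Suggest a service level for the next catalog publication.

    Hypothetical ratchet rule: if observed results consistently beat
    the current published target, nudge the target upward, but never
    past the upper threshold operations has agreed it can sustain.
    """
    median = statistics.median(actuals)
    if median > current_target:
        # Move halfway toward the observed median, capped at the
        # upper threshold, so the new goal stays achievable.
        return min(upper, (current_target + median) / 2)
    return max(lower, current_target)  # hold steady otherwise

# e.g. monthly availability percentages against a 99.5% target
print(propose_next_target([99.7, 99.8, 99.6, 99.9],
                          lower=99.5, upper=99.95, current_target=99.5))
```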

Determining how to measure service attributes can require some creative thinking: you need a metric that can actually be captured and trended. Quality-of-service indicators can be measured relatively easily for servers, storage and networks; operational protection indicators can be more challenging. The dimension of time frame is also important. For example, will your metric offer a standard for a single point in time, a trend between upper and lower thresholds during the operational day, or a standard at peak periods of the day? It is important to focus your choice of metrics on measures that the end consumer can understand and value. If you are going to differentiate between services based on such metrics, they need to be in “consumer speak” rather than “IT speak.” Formulating an appropriate policy on metrics, their time frame and their reporting should be a fundamental part of your service catalog.
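A small sketch of those three time-frame choices, using hourly IOPS samples; the function, its labels and the sample data are illustrative assumptions:

```python
def summarize_metric(samples, lower, upper, peak_slice):
    """Summarize one day of samples three ways, matching the three
    time-frame choices above. `samples` is a list of (hour, value)
    tuples; `peak_slice` is a (start_hour, end_hour) window."""
    values = [v for _, v in samples]
    in_range = sum(lower <= v <= upper for v in values)
    peak = [v for h, v in samples if peak_slice[0] <= h < peak_slice[1]]
    return {
        "point_in_time": values[-1],                       # latest sample
        "pct_within_thresholds": 100.0 * in_range / len(values),
        "peak_period_worst": min(peak) if peak else None,  # worst at peak
    }

# Invented hourly IOPS readings, thresholds 4000-6000, peak 9:00-17:00.
day = [(h, 4500 + 100 * (h % 5)) for h in range(24)]
print(summarize_metric(day, lower=4000, upper=6000, peak_slice=(9, 17)))
```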

Realistic Measurements

The prudent CIO will take steps to ensure that each of the attributes mentioned in the service offering (as detailed in the organization’s service catalog) can be empirically tracked, monitored and reported. These indicators should be established with target operations occurring between upper and lower thresholds. Using a single target metric instead of upper and lower thresholds can inhibit the ability to intelligently track performance for continuous improvement, and can paint a potentially demoralizing black-and-white picture for the operations team: you either made it or you didn’t. With a range of “acceptance” metrics, the IT organization can set its own “real” target smack in the middle of the acceptable range, with consumer expectations set at the lower threshold. It is important to ensure that the end consumer perceives the lower end of the range as an acceptable service level for the resource they have purchased. This approach gives IT some wiggle room while the system, and the processes and people supporting it, go through the changes needed to deliver effective services. More importantly, it also provides an incentive to rise above the target with service level improvements.
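A minimal sketch of that range-based evaluation, assuming availability percentages: the consumer-facing commitment sits at the lower threshold and IT’s internal target mid-range. The labels and thresholds are illustrative:

```python
def evaluate(actual, lower, upper):
    """Classify a measured value against the published acceptance range.

    The consumer commitment is the lower threshold; IT's own "real"
    target sits in the middle of the range.
    """
    internal_target = (lower + upper) / 2
    if actual < lower:
        return "missed SLA"  # below the consumer commitment
    if actual < internal_target:
        return "met SLA, below internal target"
    return "met SLA, at or above internal target"

for availability in (99.4, 99.6, 99.9):
    print(availability, "->", evaluate(availability, lower=99.5, upper=99.95))
```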

Now, Measure!

Now that you know exactly what you are measuring and how the attributes will be measured, you have a specification for selecting an appropriate tool or tools to support your efforts. Unfortunately, finding the tools to produce the metrics can be a challenge; few, if any, work across the full range of infrastructure and the vendors who provide it. Typically, more than one tool is required. Many organizations choose a preferred vendor and stick with that vendor’s native tools, while others select two or more third-party tools in the hope of keeping pace as vendors constantly enhance and improve their products. At the end of the day, however, a simple combination of native tools and some creative scripting will provide all the basics you need.
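As a sketch of that native-tools-plus-scripting approach, the collector below shells out to invented placeholder commands (no real vendor CLI is implied; substitute whatever reporting commands your tools actually ship) and appends the readings to a CSV for later trending:

```python
import csv
import subprocess

# Hypothetical native-tool commands; these names are placeholders.
COLLECTORS = {
    "storage_iops": ["storage-cli", "report", "iops"],
    "server_uptime_pct": ["server-cli", "report", "uptime"],
}

def collect(command):
    """Run one native tool and return its (assumed) single-line
    numeric output. Real tools will need real output parsing."""
    out = subprocess.run(command, capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

def snapshot(path="metrics.csv"):
    """Append one row per metric so values can be trended over time."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for name, command in COLLECTORS.items():
            writer.writerow([name, collect(command)])

if __name__ == "__main__":
    snapshot()
```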

Finally, the prudent CIO will develop and publish a monthly “score card” showing which divisions or departments are using which service offerings, how much those offerings cost, and most importantly, how IT performed in meeting its service level objectives for the period and in comparison to the previous reporting period. This provides a foundation on which new relationships and behaviors can be based, with IT able to empirically prove that it delivered what it promised, and in some cases, beat what it promised.
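A minimal sketch of such a score card roll-up, with an invented row format and hypothetical field names:

```python
def score_card(rows, previous):
    """Roll up one month of usage into a simple score card.

    `rows` is a list of dicts with keys "dept", "offering", "units",
    "unit_cost" and "slo_met"; `previous` maps (dept, offering) to
    last month's SLA attainment so the report can show a trend.
    """
    report = {}
    for r in rows:
        key = (r["dept"], r["offering"])
        entry = report.setdefault(key, {"cost": 0.0, "met": 0, "total": 0})
        entry["cost"] += r["units"] * r["unit_cost"]
        entry["met"] += r["slo_met"]
        entry["total"] += 1
    for (dept, offering), e in sorted(report.items()):
        attainment = 100.0 * e["met"] / e["total"]
        prior = previous.get((dept, offering))
        trend = "" if prior is None else f" (prev {prior:.0f}%)"
        print(f"{dept}/{offering}: ${e['cost']:.2f}, SLA {attainment:.0f}%{trend}")

# Invented sample data for illustration only.
rows = [
    {"dept": "Sales", "offering": "Gold Storage", "units": 10,
     "unit_cost": 120.0, "slo_met": True},
    {"dept": "Sales", "offering": "Gold Storage", "units": 10,
     "unit_cost": 120.0, "slo_met": False},
]
score_card(rows, previous={("Sales", "Gold Storage"): 100.0})
```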

This is part of a seven-part series from Dick Benton of GlassHouse Technologies. See his first post of the series.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
