Posted By Industry Perspectives On February 27, 2013 @ 9:32 am In Industry Perspectives | No Comments
Dick Benton, a principal consultant for GlassHouse Technologies, has worked with numerous Fortune 1000 clients in a wide range of industries to develop and execute business-aligned strategies for technology governance, cloud computing and disaster recovery.
In my last post, I outlined the fourth of seven key tips IT departments should follow if they want to begin building a better service strategy for their internal users: advertise the Ts and Cs. That means developing a simple, easy-to-read list of the terms and conditions under which IT services are supplied. Now we will address actually building the order process, so that services can be provisioned in an automated way that satisfies today's on-demand consumers. One of the cloud's major differentiators is its ability to support self-service selection followed by automated provisioning; in other words, offering services on a Web page that triggers scripts to automatically provision the selected resource.
Historically, the largest roadblock to an expedited provisioning process has been the "legacy" approach to service fulfillment. Typically, when the busy consumer seeks access to a resource, he or she faces a daunting obstacle course of approvals. Usually, this involves a lengthy form to complete (sometimes even an online form) with an explicit rationale for the request, along with minute detail on the amount of resources to be consumed and information such as peak loadings. Some organizations include an additional section for risk management, covering the impact the resource may have on, or its association with, various compliance legislation and internal compliance regulations. Other sections provide space for approval of Active Directory additions, network access, storage utilization, backup protection and even disaster recovery. Often there is a section covering the data to be used with the resource, along with a questionnaire on its corporate sensitivity and the need for encryption.
Although the process seems to place every conceivable obstacle between the consumer and the resource sought, it is typically one that has been built up over time. Each checkpoint probably developed in response to some painful, embarrassing or uptime-impacting event in the past, and the process is designed specifically to prevent those events from recurring. The "cover-your-backside" process is not uncommon in IT procedures.
Just-in-case provisioning is designed to ensure that the risk of any inappropriate or unauthorized allocation or usage is reduced to near zero. In the past, this has been the foundation of provisioning time frames lasting weeks and even months. It is also why IT is often seen as an impediment rather than an enabler of the business. These forces have driven an evolution in the defense mechanisms of resource requestors, who learned what the "right" answers were in order to get a request through the process with only a three- or four-week delay. And typically there is no detection mechanism, nor any procedure for withdrawing a resource found to have been used in a manner different from its "planned" purpose.
As is usual with many procedure-bound organizations, there is little, if any, formal policy behind the procedures, and in the rare instances when there is a policy, it is even rarer for there to be an enforcement detection and response mechanism. The resource requestors soon learn the thresholds they must not cross in the request process, knowing that once the resource is assigned, it can be used as they see fit.
So now the unintended outcome is that IT has trained the resource-requesting community to submit requests crafted to slip past the various security, access, network and asset management checkpoints and ensure a speedy (only weeks) allocation.
And then came Amazon! Now, the resource consumer can simply go to a Web page, peruse the offerings, select an offering and supply a credit card, and minutes later, if not sooner, the resource is available for use. Of course, the innovative resource consumer soon finds a way to expense the costs, particularly as they are able to produce results for their management in days instead of months. In many cases, this resource consumption takes place without IT being aware that their once captive consumer has now deserted them for a resource vendor who can actually accommodate their needs.
For IT to operate its environment as a cloud resource, it must find a way to eliminate the legacy approval process. One way to do this is to provide pre-authorized services: the effort of authorizing a resource is focused on the service itself, before it is made visible in the catalog, rather than at each point of consumption.
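To make the idea concrete, here is a minimal sketch of what a pre-authorized catalog entry might look like. The offering names, roles and quotas are hypothetical, and a real implementation would sit behind a Web catalog and an identity system; the point is simply that the policy decision is baked into the offering in advance, so each individual request is granted or denied instantly rather than routed through approvers.

```python
from dataclasses import dataclass

# Hypothetical catalog entry: the approval work is done once, up front,
# when the service is defined -- not repeated on every request.
@dataclass(frozen=True)
class ServiceOffering:
    name: str
    approved_roles: frozenset  # roles that may consume without further review
    quota_gb: int              # resource ceiling agreed when the service was authorized

CATALOG = {
    "dev-vm-small": ServiceOffering("dev-vm-small", frozenset({"developer", "qa"}), 50),
    "prod-db":      ServiceOffering("prod-db", frozenset({"dba"}), 500),
}

def request_service(offering_name: str, requester_role: str) -> str:
    """Grant or deny immediately, based on the pre-authorized policy."""
    offering = CATALOG.get(offering_name)
    if offering is None:
        return "denied: no such offering"
    if requester_role not in offering.approved_roles:
        return "denied: role not pre-authorized for this offering"
    return f"provisioning {offering.name} ({offering.quota_gb} GB) now"
```

A developer asking for `dev-vm-small` gets an immediate grant; the same developer asking for `prod-db` gets an immediate, policy-based denial, with no forms or committees in either path.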
Supported by strong policies on the corporate use intended for each service offering, this approach can place responsibility solely in the hands of the resource consumer rather than IT. This sea change in procedure and policy will require a close collaboration between the CIO and the business units, as the CIO will need the most powerful voices from the business community to break through bureaucratic legacies in risk management and internal IT management.
For IT to be competitive, its procedures must execute within the same time frame as its public cloud competition; that is, in real time. Service catalogs need to be available on the Web and support self-selection, along with some form of automated workflow for authorization. Once a service is selected and authorized (or paid for), provisioning needs to be equally streamlined, with pre-authorized workflows that can be triggered and executed automatically to supply the requested resource. This painful change is necessary if IT is to be seen as a positive enabler, providing resources on demand to the dynamic business consumer.
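The "pre-authorized workflows" above can be sketched as a simple chain of provisioning steps tied to each catalog offering. The step names and the offering key are illustrative assumptions; in practice each step would call an infrastructure API (hypervisor, storage array, backup system, directory), but the shape is the same: selection triggers the whole approved chain with no human in the loop.

```python
# Hypothetical provisioning steps; each appends to an audit log where a real
# system would call an infrastructure API.
def create_vm(req):       req["log"].append("vm created")
def attach_storage(req):  req["log"].append("storage attached")
def register_backup(req): req["log"].append("backup policy registered")
def grant_access(req):    req["log"].append("network/AD access granted")

# Each catalog offering maps to the step chain approved for it in advance.
WORKFLOWS = {
    "dev-vm-small": [create_vm, attach_storage, register_backup, grant_access],
}

def provision(offering: str) -> list:
    """Run the pre-authorized workflow for a selected offering, end to end."""
    request = {"offering": offering, "log": []}
    for step in WORKFLOWS[offering]:
        step(request)
    return request["log"]
```

Because the workflow was authorized when the service was defined, the elapsed time from catalog click to usable resource is the run time of these steps, not the cycle time of an approval committee.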
In my next post, I will outline how IT can go about proving what was delivered. If you are going to supply services in various categories and under various service level agreements, you need to implement monitoring and metrics to demonstrate that you have met your commitments, both to clients and your own management.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Article printed from Data Center Knowledge: http://www.datacenterknowledge.com
URL to article: http://www.datacenterknowledge.com/archives/2013/02/27/fifth-key-to-brokering-it-services-internally-building-the-order-process/
URLs in this post:
- GlassHouse Technologies: http://www.glasshouse.com/
- last post: http://www.datacenterknowledge.com/archives/2013/01/22/fourth-key-to-brokering-it-services-internally-advertise-the-ts-and-cs/
- seven: http://www.datacenterknowledge.com/archives/2012/08/09/seven-tips-to-keeping-it-competitive/
- guidelines and submission process: http://www.datacenterknowledge.com/industry-perspectives-thought-leadership/
- Knowledge Library: http://www.datacenterknowledge.com/archives/category/perspectives/
Copyright © 2012 Data Center Knowledge. All rights reserved.