Rodrigo Flores is the founder and chief technology officer of newScale, Inc. He co-authored the book “Defining IT Success through the Service Catalog” and has led the development of a formal ITIL certification for catalog practitioners.
The arrival of new data center service providers “in the cloud” such as Google, Amazon and Microsoft is changing the face of IT services, and for good reason: it’s increasing choice, while lowering the costs of infrastructure services. It’s also creating a time bomb for IT operations.
IT operations groups are going to be increasingly evaluated against the service and customer satisfaction levels provided by public clouds. One day soon, the CFO may walk into the data center and ask, “What is the cost per hour for internal infrastructure, how do IT operations costs compare to public clouds, and which service levels do IT operations provide?” That day will happen this year.
Many IT operations groups are responding by planning internal clouds so they can have more control and visibility while taking advantage of their existing investment in IT resources. While that’s a good direction, it may come too late, cost too much, and fail to deliver the value the customer, or employee, expects. It’s hard to compete on price with groups (e.g., Amazon) that buy truckloads of servers per week.
For IT operations to stack up against public clouds, it must understand that cloud computing is an operating model underpinned by virtualization technology. And just as the virtualization layer abstracts the underlying hardware, a front-office service definition layer abstracts the infrastructure and operations from the customer.
The control point will be a front office that clearly explains what IT offers, how to contract it, and its costs, policies and service levels, and that provides visibility into delivery from multiple sources. IT teams need to make self-service for their own resources as easy and robust as it is for public clouds.
The next three recommendations will help IT operations buy time and deliver assets and learning critical to the success of a private cloud.
First, give up the fight: Enable the safe, controlled use of public clouds. There’s plenty of anecdotal and survey data indicating that developers’ use of public clouds is already widespread. An informal newScale poll in April found that about 40% of enterprises are using clouds – rogue, uncontrolled, and under the covers, maybe. But they are using public clouds.
The new role for IT is to become the trusted broker that actually enables enterprise agility. It’s not the big that eat the small, it’s the fast that eat the slow.
Embrace the use of public clouds. To do this, IT must define a set of public cloud services, along with authorizations, workload policies, and a governance framework. This is often done through a service catalog.
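The authorization piece of such a catalog can be sketched in a few lines. This is a hypothetical illustration, not any real catalog product’s data model: the service entry, its field names, and the `can_order` check are all invented for the example.

```python
# Hypothetical catalog of pre-approved public cloud services. Each entry
# pairs an offering with the authorizations and workload policy IT has set.
APPROVED_CLOUD_SERVICES = [
    {
        "name": "Public cloud dev/test server",
        "provider": "Amazon EC2",
        "authorized_roles": ["developer", "qa"],
        "workload_policy": "no production or regulated data",
        "max_instances_without_approval": 5,
    },
]

def can_order(user_roles, service):
    """True if any of the user's roles is authorized to order the service."""
    return any(role in service["authorized_roles"] for role in user_roles)
```

The point of the structure is that governance rides along with the offering: a developer’s order is approved automatically, while an unauthorized request is routed to review rather than silently placed on a corporate credit card.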
The advantage is that IT operations can gain visibility into what’s already going on, establish proper security policies, and ensure at least some oversight of sensitive data. Over time, IT operations can learn about the strengths and deficiencies their customers see in public clouds and come back with new offers that better serve the business.
Define your Model T Ford: Standardize service definitions for development and test now.
The biggest challenge to building a private cloud is the need to standardize service definitions and components. It’s not possible to build the IT factory when every configuration is custom tailored; IT operations needs to walk before it runs, and for that it needs to define its Model T before building advanced jets.
A service catalog with standards, service definitions, and underpinning component services is needed. This catalog may have items such as “Small Linux server, 1.7GB RAM, 160GB disk” or “Windows SQL development environment.”
Unlike public cloud offerings, these packages would include additional service components such as “Tier 3 monitoring,” “Help desk service,” “Virus protection and removal” and “Domain mapping,” for example. Each of these service components would also be standardized, possibly priced, and orderable through the service catalog.
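The composition described above can be made concrete with a minimal sketch. All the names and prices here are illustrative assumptions, not real rates: a standard infrastructure item carries a base price, and each standardized component adds its own.

```python
# Hypothetical monthly prices for standardized service components.
COMPONENTS = {
    "tier3_monitoring": 15.00,
    "help_desk": 10.00,
    "virus_protection": 5.00,
}

def monthly_price(base_price, component_names):
    """Base infrastructure price plus the sum of the selected components."""
    return base_price + sum(COMPONENTS[name] for name in component_names)

# A standardized "Model T" offering: one spec, reused for every order.
small_linux = {
    "name": "Small Linux server",
    "ram_gb": 1.7,
    "disk_gb": 160,
    "base_price": 40.00,
}

total = monthly_price(small_linux["base_price"],
                      ["tier3_monitoring", "help_desk"])  # 65.0
```

Because every order draws from the same small set of specs and components, demand forecasts, inventory, and pricing all become tractable – which is exactly the list of benefits that follows.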
The value of defining standard services includes:
- Better capacity planning. The factory can get better forecasts if all the components are similar.
- Faster provisioning, thanks to the ability to hold inventory, keep vendor equipment on premises, or simply move lower-priority workloads off.
- Lower costs. Knowing what you need enables better buying.
Service definitions also help communicate the added value that IT operations provides. For example, Amazon does not do virus removal, and its help desk is limited and costs extra.
When services are defined as discrete components in a catalog, the conversation with the CFO becomes simpler. Rather than an argument about cost, it becomes a conversation about the trade-offs among quality, agility, risk, cost, capability and security.
One trick to consider: Adopt service definitions from major cloud providers. After all, they are already market tested and familiar to your customers.
Think like an ATM: Embrace self-service immediately. Bank tellers may be lovely people, but most consumers prefer ATMs for standard transactions. The same applies to clouds. Customers’ ability to get their own resources without an onerous process is critical.
If the customer can’t self-serve, the service doesn’t exist. Self-service means the customer can “Find, Order, Track, Manage and Act” upon those service offerings. The customer also needs “just-enough-console” rights to carry out lifecycle operations on their resources, such as rebooting, adding storage or other operations.
There are many valid issues and obstacles around enabling self-service, such as “What if they consume the entire data center?” or “If they see everything we offer, they’ll order it.” But don’t worry: there are proven techniques for dealing with these issues. For example, quotas and resource pools are useful; as long as the user consumes from the pool, provisioning happens in minutes. Amazon uses this technique.
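The quota technique can be sketched in a few lines. This is a minimal illustration under assumed names (`ResourcePool`, a vCPU quota), not any provider’s actual mechanism: requests that fit the pool are granted instantly, and only requests that would exceed it need human approval.

```python
# Minimal sketch of self-service with a resource pool: orders within the
# pre-approved quota provision immediately, with no per-request approval.
class ResourcePool:
    def __init__(self, vcpu_quota):
        self.vcpu_quota = vcpu_quota  # capacity pre-approved for this team
        self.vcpu_used = 0

    def provision(self, vcpus):
        """Grant the request if it fits the remaining quota, else defer it."""
        if self.vcpu_used + vcpus > self.vcpu_quota:
            return False  # over quota: escalate for approval instead
        self.vcpu_used += vcpus
        return True

pool = ResourcePool(vcpu_quota=16)
pool.provision(8)   # granted in minutes, no ticket required
pool.provision(12)  # denied: would exceed the team's pool
```

The design answers the “what if they consume the entire data center” objection directly: self-service is unconstrained inside the pool, and governance kicks in only at the pool boundary.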
The value of IT self-service is huge:
- Reduction of cycle time and increased agility for the business.
- Elimination of labor for scarce resources such as enterprise architects.
- Elimination of labor across the change and release management processes.
It would be unfortunate to focus solely on costs and not on the value of IT operations, because cloud computing is most importantly about agility, not cost.
In fact, moving existing workloads to a cloud may not provide the biggest payoff. More important than marginal cost savings are the new applications, new projects, and new experiments that enable new business opportunities.
Which is why it makes sense to enable public clouds as a partner rather than the enemy.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.