Susan Blocher is VP of Global Marketing for HP's Servers Business Unit.
If there's a defining characteristic of business in this millennium, it's that failure comes faster. Don't take my word for it; ask the 60,000 or so people who were working in Blockbuster's 9,000-plus retail stores in 2004. Six years after that peak, the company filed for bankruptcy. Its failure to adapt to the changing technology landscape, the on-demand economy and shifting consumer behavior proved catastrophic.
There's a lesson here for others. According to recent IT surveys from leading industry analysts and consultants, line-of-business executives believe that IT will play a substantial role in transforming their business over the next five years. Just enough time for the next Blockbuster to find itself looking for a bailout suitor.
At the same time, emerging technologies such as cloud computing, advanced mobility and Big Data present new business opportunities. The trouble is, too many executives believe their IT organizations aren't equipped to capitalize on these trends quickly enough to deliver differentiated services as they're created. Simply put, they are saddled with traditional IT systems that are inefficient, slow and manually driven.
A new approach is needed. Rather than seeing infrastructure as a collection of servers, storage and networking gear, forward thinkers are aggregating pools of end-to-end Compute resources for use from the edge to the core, up and down an integrated workload stack, with an advanced set of economics and automated operational approaches to power a New Style of Business.
The Compute Paradigm: Flexible Consumption Models
There was a time when technology needed to be a fixed point. Servers and software could be tightly configured to handle a limited number of operations, squeezing cost out of the enterprise. Automation allowed for efficient handling of processes that rarely changed, because they didn't need to. This “one size fits all” approach will no longer work.
In the Compute era, IT leaders need to offer users and departments flexible consumption models for achieving business outcomes. We're already seeing this dynamic at work in the public cloud as online retailers scale up resources to handle the holiday shopping rush. What if this same flexibility were afforded to the business unit manager needing to unify a distributed development team ahead of a key deadline? What if business leaders could simply define their goals and order internal IT resources to support them, on-demand, like any other service?
Financing should be just as flexible. Traditional, top-down IT may work for some companies. Others may prefer a managed hosting model where owned resources are governed and apportioned by a third-party. Others may prefer to rely on the public cloud. At HP, we see a growing number pooling all their in-house gear and software for use as a service that IT leaders broker and departments consume according to budgetary limits.
We don’t see this as a nice-to-have but rather as a strategic imperative. Business moves too fast, especially when so much of it is governed by systems of engagement. Adapting to the users that “engage” with these systems -- from mobile banking and e-commerce to online music stores -- is no longer optional. Systems in the Compute era are designed with this sort of flexibility in mind, breaking the fixed, brittle molds created by their predecessors and built with three distinct characteristics for serving business needs:
- Converged. Discrete servers are ineffective for serving ever-changing markets. Instead, we need pools of resources, virtualized and converged with networking, storage and management that can be shared by many applications as well as managed and delivered as a service.
- Composable. In the Compute era, infrastructure isn't metal, it's fluid. Pools of processing power and storage are captured in a networked fabric and disaggregated so they can be quickly composed to service workloads, then decomposed back into the pool for others to use as the occasion demands. Importantly, this work is performed entirely in software and, as such, requires no new architecture to implement.
- Workload-Optimized. There's a reason why legacy IT systems are rigidly implemented. Rigidity, when applied to a specific problem, puts the optimal resources to work in the right place. Flexible, assemble-on-demand Compute infrastructures confer this same level of customized performance, but without calcifying the underlying system.
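To make the compose/decompose cycle concrete, here is a minimal sketch of the idea, not any specific HP product or API. All class and field names are hypothetical; a real composable fabric would track far more than CPU cores and storage:

```python
# Hypothetical sketch of composable infrastructure: resources live in a
# shared pool, are composed in software into a workload-sized system,
# then decomposed back into the pool when the workload finishes.

class ResourcePool:
    def __init__(self, cpus, storage_tb):
        self.cpus = cpus              # free CPU cores in the fabric
        self.storage_tb = storage_tb  # free storage, in terabytes

    def compose(self, cpus, storage_tb):
        """Carve a slice out of the pool for a workload, entirely in software."""
        if cpus > self.cpus or storage_tb > self.storage_tb:
            raise RuntimeError("insufficient free capacity in the pool")
        self.cpus -= cpus
        self.storage_tb -= storage_tb
        return {"cpus": cpus, "storage_tb": storage_tb}

    def decompose(self, system):
        """Return a workload's slice to the pool for others to use."""
        self.cpus += system["cpus"]
        self.storage_tb += system["storage_tb"]

# Compose a system for an analytics job, then hand the resources back.
pool = ResourcePool(cpus=256, storage_tb=100)
analytics = pool.compose(cpus=64, storage_tb=20)
pool.decompose(analytics)
```

The point of the sketch is that no hardware moves: "building" and "tearing down" a system are just bookkeeping operations against the shared pool.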
The Evolving Enterprise: Predictive, Autonomic Compute Power at Work
How far can we push the Compute model? That remains to be seen, but there’s no doubt we’ve come a long way already. Organizations that used to spend thousands of dollars on licensing to slice up inefficient servers to get more value from them are now designating their IT chiefs to build service bureaus that collect and distribute precious Compute resources where they’re needed, just in time.
We've already added analytics capabilities that allow systems to preemptively add Compute power to departments known to need it at certain times of the day or year, much as an e-tailer needs extra processing capacity to handle Black Friday and Cyber Monday traffic. Longer term, we'll have autonomic systems that mirror the human immune system, applying software patches like white blood cells dispatched to heal a wound, such as a cybersecurity breach.
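The calendar-driven pre-allocation described above can be sketched in a few lines. This is an illustrative toy, not HP's analytics engine: the peak windows, core counts and lead time are invented for the example (using the actual 2015 Black Friday and Cyber Monday dates):

```python
# Hypothetical sketch of predictive capacity planning: extra Compute is
# staged ahead of demand windows the analytics layer has learned.
from datetime import date, timedelta

PEAK_WINDOWS = {
    # (start, end): extra CPU cores to stage for that window
    (date(2015, 11, 27), date(2015, 11, 29)): 128,  # Black Friday weekend
    (date(2015, 11, 30), date(2015, 11, 30)): 96,   # Cyber Monday
}

def planned_capacity(today, baseline=64, lead_days=1):
    """Baseline cores, plus any peak allocation starting within lead_days."""
    extra = 0
    for (start, end), cores in PEAK_WINDOWS.items():
        # Begin staging lead_days before the window opens.
        if start - timedelta(days=lead_days) <= today <= end:
            extra = max(extra, cores)
    return baseline + extra
```

In a real system the windows would come from learned traffic models rather than a hard-coded table, but the shape is the same: capacity decisions are made before demand arrives, not after.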
In that sense, Compute isn’t so much a technology model as it is an approach that’s flexible, service-oriented and designed to capture opportunities as they happen -- and head off disasters before it’s too late.
Don't let your company miss the opportunity when the technology is already at your doorstep. Get ready for the next era – the era of Compute.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.