Michael Jackson is a co-founder of Adaptive Computing and directs Adaptive Computing's strategic planning and cross-company coordination, with an added focus on business and partner development.
Increasingly, cloud computing has become an IT priority for virtually every organization in business today. According to a June 2010 Pew Research Center survey, a decisive majority of technology professionals predict that, by 2020, most people will access software applications online and work through remotely accessed server networks. SaaS (Software as a Service) cloud applications such as Salesforce.com and NetSuite are standard, accepted alternatives to traditional client-server enterprise applications. A new competitor to Amazon's EC2 service surfaces almost every day, and virtually every technology company is incorporating the word "cloud" somewhere in its description.
Given these rapid developments in cloud technologies and services, what are some of the key components to watch for as your enterprise moves into this new IT architecture?
The technological foundation of a cloud computing environment lies in its decision engine. A multi-dimensional, predictive decision engine dynamically manages workloads and resources so that the cloud environment self-optimizes, producing more results with greater speed and efficiency. A limited, static cloud with no decision engine undermines the scalability and elasticity of your environment, which are the largest benefits of deploying solutions in the cloud in the first place. If your workload peaks during an off-hour, is your cloud able to anticipate the need to provision more services? Or will it collapse under the weight of higher-than-anticipated demand?
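The anticipation described above can be sketched in a few lines. The following is a minimal, illustrative Python sketch of a predictive provisioning decision; the class name, the naive trend-based forecast, and the headroom factor are all assumptions for illustration, not the API of any real cloud product.

```python
import math
from collections import deque


class PredictiveProvisioner:
    """Illustrative sketch of a predictive decision engine:
    watch recent demand, extrapolate the trend, and size capacity
    ahead of the peak instead of reacting after it arrives."""

    def __init__(self, capacity_per_node, history_size=6):
        self.capacity_per_node = capacity_per_node
        # Keep a short rolling window of observed demand.
        self.history = deque(maxlen=history_size)

    def observe(self, demand):
        self.history.append(demand)

    def forecast(self):
        # Naive linear extrapolation from the last two observations;
        # a real engine would use far richer models (seasonality, SLAs).
        if len(self.history) < 2:
            return self.history[-1] if self.history else 0
        trend = self.history[-1] - self.history[-2]
        return max(0, self.history[-1] + trend)

    def nodes_needed(self, headroom=1.25):
        # Provision for forecast demand plus headroom, before the peak hits.
        expected = self.forecast() * headroom
        return max(1, math.ceil(expected / self.capacity_per_node))
```

Under this sketch, a cloud observing demand climbing from 100 to 150 requests per interval would already be provisioning for roughly 200, rather than waiting for the spike to arrive.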
A multi-dimensional decision engine, with the ability to dynamically provision for an unexpected spike in demand, affects not only your cloud's uptime under Service Level Agreements (SLAs) but also the speed with which your cloud can make crucial decisions.
To address the rapid nature of cloud computing decisions and processes, a quality cloud must be self-optimizing. A self-optimizing cloud is one that requires minimal to no manual input or effort. Instead, a self-optimizing cloud determines on its own how to be more efficient. An open and flexible management abstraction layer integrates the multiple dimensions of infrastructure data, orchestrating the chaos of complex, heterogeneous IT environments and maximizing control and optimization.
Self-optimization begins at the basic level of reporting. Consolidated cloud event reporting gives administrators real-time visibility into resource and workload events and issues, helping them understand and optimize the performance of their cloud environment.
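Consolidated reporting of this kind amounts to rolling raw events up into a per-resource view. The sketch below assumes a hypothetical event schema (dicts with `resource` and `severity` keys); the function name and schema are illustrative, not a real monitoring API.

```python
from collections import Counter, defaultdict


def consolidate_events(events):
    """Roll raw resource/workload events into a per-resource summary
    an administrator can scan at a glance.

    `events` is an iterable of dicts with assumed 'resource' and
    'severity' keys -- the schema here is purely illustrative.
    """
    summary = defaultdict(Counter)
    for event in events:
        summary[event["resource"]][event["severity"]] += 1
    # Convert to plain dicts for easy display or serialization.
    return {resource: dict(counts) for resource, counts in summary.items()}
```

Feeding a stream of node-level warnings and failures through such a function yields one consolidated picture of where the environment needs attention, which is the raw material a self-optimizing cloud acts on.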
Static vs. Dynamic
A dynamic cloud, as opposed to a static one, is future-predictive and able to anticipate workload demands. A static cloud is manual-intensive, reacting to changes in workload rather than proactively predicting higher demand. Dynamic cloud environments are agile – providing quick delivery of business services and optimal resource provisioning to avoid failures. They are automated – requiring minimal IT interaction. Finally, a quality dynamic cloud must be adaptive – self-optimizing and responding to changing conditions, without manual intervention, to optimize delivery of services.
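The static-versus-dynamic contrast can be made concrete with two sizing policies. This is a hedged, simplified sketch; the function names and the two-point trend estimate are assumptions chosen only to illustrate the difference in behavior.

```python
import math


def static_nodes(current_load, capacity_per_node):
    # Static/reactive policy: size only for the load already observed,
    # so capacity always lags behind a spike.
    return max(1, math.ceil(current_load / capacity_per_node))


def dynamic_nodes(load_history, capacity_per_node):
    # Dynamic/predictive policy: extrapolate the recent trend and size
    # for the load expected next interval, so capacity arrives before
    # the demand does.
    trend = load_history[-1] - load_history[-2]
    expected = max(load_history[-1], load_history[-1] + trend)
    return max(1, math.ceil(expected / capacity_per_node))
```

With load climbing from 100 to 180 against nodes that each handle 100, the static policy sizes for what has already happened (2 nodes), while the dynamic policy sizes for where the trend is heading (3 nodes), which is exactly the gap between reacting to a spike and anticipating it.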
The dynamic cloud allows IT departments to deliver applications, expand resources and keep services running optimally – maximizing uptime and eliminating the need for expensive, time-consuming maintenance by IT staff.
The Road Ahead
A multi-dimensional decision engine, a self-optimizing cloud and a dynamic infrastructure can together produce a significant competitive advantage for your enterprise. Already a priority in much of the private and public sectors, cloud adoption is expected to skyrocket within the next five years. Instead of merely catching up to new trends and staying on top of the latest technology, a successful cloud strategy looks ahead.
Well-designed cloud computing strategies change the business landscape by freeing up the raw materials of innovation: people, time and money. The “one size fits most” environment eliminates a good portion of the complexity, time and effort usually involved in deploying traditional IT services. This allows IT staff to concentrate their efforts on developing new initiatives, instead of merely maintaining pre-existing projects. In today’s hyper-competitive business environment this approach can make all the difference.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.