By now, the potential benefits of moving workloads from the data center to the cloud are fairly well known. They include reduced time to market, increased flexibility to scale with demand, accelerated innovation, better reliability and faster recovery when applications go down.
These benefits are often collapsed into an overly simplistic theme: the cloud will save you money. But this is a common misunderstanding. Take it from a cloud infrastructure provider: while moving workloads to the cloud can deliver cost savings in some cases, that isn’t always true, and results vary from organization to organization and even workload to workload.
It’s easy to see the allure of the cost savings argument, though. Back in 2017, the US Chamber of Commerce estimated that a typical enterprise data center cost $215.5 million to build (including the cost of land, construction and IT equipment) and another $18.5 million a year to operate. Those costs have almost certainly risen significantly since then. Couple that with estimates that data center utilization rates hover somewhere between 30-50% (at best) and that 30% of servers sit completely idle at any given time, and that’s a lot of capital to tie up in infrastructure.
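To put those utilization figures in perspective, here is a back-of-envelope sketch of what underused capacity does to effective cost. The cost figures come from the estimate cited above; the 10-year amortization period is an assumption for illustration only.

```python
# Back-of-envelope effective cost of underutilized on-premise capacity.
# Build/opex figures are from the 2017 estimate cited above; the 10-year
# amortization period is a hypothetical assumption.

build_cost = 215.5e6   # one-time build cost, USD
annual_opex = 18.5e6   # annual operating cost, USD
years = 10             # assumed amortization period

total_cost = build_cost + annual_opex * years  # $400.5M over 10 years

for utilization in (0.30, 0.50):
    # Total spend divided by the fraction of capacity actually used:
    # the effective price paid per dollar of *useful* capacity.
    effective = total_cost / utilization
    print(f"{utilization:.0%} utilization -> ${effective / 1e6:.1f}M effective")
```

At 30% utilization, every dollar of useful capacity effectively costs more than three dollars of spend, which is the gap cloud right-sizing aims to close.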
While cloud hosting can save organizations money on hardware purchases, maintenance and upgrades, as well as on data center hosting costs (power, cooling and storage), there are still costs involved, including the cost of the migration itself. Organizations considering a cloud migration would do better to weigh cost control (and an enhanced ability to capture revenue opportunities) over cost savings in their decision-making process.
Shifting from CapEx to OpEx
That said, for many organizations – including those that rely on applications running on the IBM Power platform – the cloud can be a sound financial investment. In particular, the pay-as-you-go (PayGo) model enables organizations to right-size their logical partitions (LPARs) to fit their typical workloads – with the option to scale up to meet peak demand and scale down during lulls – instead of having to design, deploy and maintain all the infrastructure needed for peak demand. That’s particularly advantageous given the typical data center utilization rates mentioned above.
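The economics of right-sizing can be sketched with a simple comparison: provisioning for peak demand year-round versus paying only for the capacity used each month. All core counts and rates below are hypothetical, not vendor pricing; the sketch assumes PayGo carries a 40% per-unit premium and still comes out ahead for a spiky workload.

```python
# Illustrative comparison of fixed peak provisioning vs. pay-as-you-go.
# All figures are hypothetical assumptions, not vendor pricing.

def fixed_cost(peak_cores: int, cost_per_core_month: float, months: int) -> float:
    """Provision for peak demand all the time."""
    return peak_cores * cost_per_core_month * months

def paygo_cost(monthly_demand: list[int], cost_per_core_month: float) -> float:
    """Pay only for the cores actually used each month."""
    return sum(cores * cost_per_core_month for cores in monthly_demand)

# Hypothetical workload: peaks at 100 cores twice a year, idles near 30.
demand = [30, 30, 40, 100, 35, 30, 30, 45, 100, 35, 30, 30]
rate = 50.0  # assumed $/core/month for owned capacity

print(fixed_cost(max(demand), rate, 12))   # provision for peak: 60000.0
print(paygo_cost(demand, rate * 1.4))      # PayGo at a 40% premium: 37450.0
```

Even with a higher unit rate, paying for actual consumption beats paying for idle peak capacity whenever utilization is low, which mirrors the 30-50% utilization rates cited earlier.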
By adopting a subscription-based, cloud hosted framework, organizations can shift IT costs from a capital expenditure (CapEx) model to an operating expense (OpEx) model. That provides both greater predictability and the opportunity to spread costs out over time.
Reducing modernization costs
With a typical lifespan of 2-5 years, data center infrastructure – including servers, routers, power supplies and other equipment – must be updated regularly, at significant cost. By contrast, since IaaS providers continuously upgrade their equipment and build those costs into their pricing, organizations can take advantage of the latest hardware – and get increased application performance – without paying the recurring cost of upgrades. IaaS providers also handle patching and other maintenance tasks (potentially providing better security), eliminating another expense that organizations would need to manage and pay for in a traditional on-premise model.
Transforming IT from a cost center to a business enabler
For most organizations, IT is a cost center. In a traditional on-premise data center model, IT costs are notoriously difficult to attribute to specific business units, functions or initiatives. As a result, internal business customers are never required to “pay” for – or even understand the magnitude of – their compute and storage resource consumption. That makes IT costs a difficult-to-understand “black box.”
At the same time, it takes both time and money to upgrade an on-premise data center. New business initiatives can be delayed or market opportunities missed if they require organizations to undertake an expansion of data center capacity. But without a direct way to calculate bottom-line ROI from IT investments, it can be difficult to get internal buy-in for necessary and beneficial IT upgrades.
But when workloads are running in the cloud, consumption-based pricing makes it much easier to allocate expenses to specific business units and understand the true IT costs of new initiatives. That means organizations can more easily implement an IT cost chargeback model and create direct incentives for internal customers to optimize their resource usage and improve efficiency.
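A chargeback model of the kind described above can be as simple as splitting the monthly cloud bill across business units in proportion to their metered consumption. The unit names, usage figures and bill amount below are hypothetical, not drawn from any real billing API.

```python
# Minimal sketch of a consumption-based chargeback allocation.
# Unit names and usage numbers are hypothetical assumptions.

def chargeback(total_bill: float, usage_by_unit: dict[str, float]) -> dict[str, float]:
    """Split a cloud bill across business units in proportion to metered usage."""
    total_usage = sum(usage_by_unit.values())
    return {unit: round(total_bill * used / total_usage, 2)
            for unit, used in usage_by_unit.items()}

monthly_bill = 42_000.00
usage = {"finance": 1200, "logistics": 800, "ecommerce": 2000}  # e.g. core-hours

print(chargeback(monthly_bill, usage))
# {'finance': 12600.0, 'logistics': 8400.0, 'ecommerce': 21000.0}
```

Because each unit now sees a bill tied directly to its own consumption, the incentive to shut down idle capacity falls on the team that created it, which is the efficiency feedback loop an on-premise “black box” cost model lacks.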
While cloud costs won’t always be cheaper than traditional data center costs, they can be better predicted and adjusted to business needs in real time. On-demand IT resources can be spun up and wound down dynamically, shortening time to market and enabling organizations to seize near-term revenue opportunities without incurring either upfront CapEx or long-term support costs.
For organizations determining whether to migrate a workload to the cloud, it is not always a simple matter of whether it is more or less expensive than running it in an on-premise data center. There is also the question of whether to lift and shift or to refactor applications to be cloud native, which introduces additional cost and risk. For organizations with legacy workloads running on, say, IBM i or AIX that are fully optimized for those platforms, it may make more sense to replicate those environments in the cloud as is and update them gradually. Once LPARs are cloud-hosted, organizations can leverage cloud-native solutions to de-silo Power workloads and take advantage of capabilities such as advanced analytics, automated backup and disaster recovery, and improved DevOps practices with automated CI/CD pipelines. So while they might not see cost savings from running workloads in the cloud, organizations do end up seeing better ROI from their workloads and data.
As they look to the cloud, organizations need to think more about the advantages of cost control rather than the glitter of cost savings. A predictable cost model makes it easier to evaluate the benefits and risks of a cloud migration.
Matthew Romero is the Technical Product Evangelist at Skytap, a cloud service to run IBM Power and x86 workloads natively in the public cloud. Matthew has extensive expertise supporting and creating technical content for cloud technologies, Microsoft Azure in particular. He spent nine years at 3Sharp and Indigo Slate managing corporate IT services and building technical demos, and before that spent 4 years at Microsoft as a program and lab manager in the Server and Tools Business unit.