
What Does Cloud Computing 2.0 Look Like?

Most in the tech industry have seen what defines Cloud Computing 1.0. Pete Johnson of ProfitBricks writes that while it’s better than traditional hosting, it’s still not all it could be. Not by a long shot. He explains what Cloud Computing 2.0 looks like.


After a 19-year career with HP that included a six-year stint running Enterprise Architecture, as well as being a founding member of HP’s public cloud efforts, Pete Johnson joined ProfitBricks in February 2013 as Senior Director of Cloud Platform Evangelism. You can follow him on his ProfitBricks blog.

There’s been a lot of coverage in the tech press lately about "per minute" billing of cloud services, which pushes the envelope on flexibility and may be putting pressure on Amazon to do the same. But what’s next? It’s fair to say that, after seven years of cloud computing, we’ve seen what Cloud Computing 1.0 is about. While better than traditional hosting, it’s still not all it could be. Not by a long shot.

What does Cloud Computing 2.0 look like? Here are some ideas:

1: Choose # of CPU cores, RAM, and amount of disk space independently

How cloudy is it, really, when your IaaS provider makes you pick from a list of cookie-cutter sizes that make life easier for them instead of more flexible for you? Think about it: why should a service provider, rather than you, decide what is right for your app or database? With most IaaS providers today, it’s like buying a car: you want the leather seats, but they only come in a package that also includes a sunroof you don’t want. Why use an IaaS platform that makes you pay for resources you don’t need? How very 1.0!

If you’re using a public cloud provider today, go through the following experiment:

Pick one of your larger servers and look at its CPU utilization. Then look at its memory utilization. Finally, look at how much ephemeral (temporary) disk space you are actually using and divide it by the amount you had to pay for when you selected your instance size. Add up the three percentages and divide by 3, one for each dimension of the server, to get your average utilization. Subtract that from 100 percent: that’s the share of your money you’re wasting on that VM. Cloud Computing 2.0 allows you to embrace flexibility and pay for exactly what you use.
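The experiment above boils down to a few lines of arithmetic. Here is a small sketch of the calculation; the utilization figures in the example are made up for illustration, not measurements from any real instance.

```python
# Sketch of the waste calculation: average CPU, RAM, and ephemeral-disk
# utilization, then subtract from 100 to get the share of spend wasted.

def wasted_percent(cpu_util, ram_util, disk_used_gb, disk_paid_gb):
    """Percentage of the VM's cost spent on unused capacity."""
    disk_util = 100.0 * disk_used_gb / disk_paid_gb   # disk used vs. disk paid for
    avg_util = (cpu_util + ram_util + disk_util) / 3.0  # one share per dimension
    return 100.0 - avg_util

# Hypothetical example: 30% CPU, 45% RAM, 60 GB used of a 160 GB instance disk
print(round(wasted_percent(30, 45, 60, 160), 1))  # 62.5
```

In this made-up case, nearly two-thirds of the instance spend buys capacity that sits idle, which is exactly the gap independent sizing is meant to close.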

Find an IaaS provider that lets you select the number of CPU cores, RAM, and amount of block storage disk space independently from one another. That way, you can size your system to your specific needs instead of trying to take your square peg of a workload and wedge it into a round hole of an instance size. Plus, what about when your workload changes?

2: Scaling has two dimensions; we all just forgot about the vertical

Hot-swappable memory, which lets you add RAM without interrupting a running server, has been around for a very long time, and the concept of scaling vertically by adding resources to an existing server is hardly new. So why, when we all started moving workloads to the cloud, did we forget about this option? The answer is simple: first-generation clouds like Amazon’s can’t do it. Cloud 1.0 providers forced customers to scale horizontally, which is ideal for their profits but not for the apps or the folks who manage them.

Why limit yourself to a provider that only allows you to scale by adding more ill-fitting instances to your collection of virtual machines? Plenty of workloads (hello, traditional relational database) benefit more from adding CPU cores or memory to an existing system than from adding more instances. Does a single scaling dimension make sense when you could double your possibilities?
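The relational-database point can be made concrete with a toy model. In a single-primary database, all writes funnel through one node, so adding replicas (horizontal scaling) leaves write capacity flat, while adding cores to the primary (vertical scaling) raises it. The throughput numbers below are assumptions for illustration, not benchmark data.

```python
# Toy model of write throughput in a single-primary relational database.
# Only the primary accepts writes, so node count does not affect write capacity.

WRITES_PER_CORE = 1_000  # assumed per-core write throughput (illustrative)

def write_capacity(nodes, cores_per_node):
    """Writes/sec for the cluster; `nodes` is intentionally unused because
    read replicas do not take writes in this model."""
    return cores_per_node * WRITES_PER_CORE

baseline = write_capacity(nodes=1, cores_per_node=4)
horizontal = write_capacity(nodes=3, cores_per_node=4)   # add two more instances
vertical = write_capacity(nodes=1, cores_per_node=12)    # same total cores, one box

print(baseline, horizontal, vertical)  # 4000 4000 12000
```

Under these assumptions, tripling the instance count buys nothing for writes, while tripling the cores on the existing server triples capacity, which is why live vertical scaling matters for this class of workload.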

Second-generation IaaS providers realize this and include vertical scaling without a reboot as a standard feature of their core offerings.

3: Better and more consistent performance through dedicated resources

Here’s a common scenario in a first-generation cloud. First, launch five VM instances. Then, perform benchmark testing on all five. Throw four away and keep the one good one.
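The launch-and-cull pattern looks something like the simulation below. The instance names and benchmark scores are randomly generated stand-ins for the inconsistent results Cloud 1.0 hardware produces; no real provider API is involved.

```python
# Simulation of the pattern above: start five instances, benchmark each,
# keep the fastest, and mark the rest for termination.
import random

random.seed(7)  # fixed seed so the demo is reproducible

instances = [f"vm-{i}" for i in range(5)]
# Simulated benchmark scores (ops/sec); the spread mimics mismatched hardware.
scores = {vm: random.uniform(50, 100) for vm in instances}

keeper = max(scores, key=scores.get)                 # the one good one
discarded = [vm for vm in instances if vm != keeper] # the four throwaways

print(f"keep {keeper}, terminate {discarded}")
```

Four-fifths of the launch effort here exists only to route around the provider's performance lottery, which is the waste the next paragraph explains.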

Why do people do this? Because over-provisioning (putting more virtual CPUs on a physical server than it has actual CPUs) and a wild assortment of mismatched commodity hardware lead to inconsistent performance in first-generation IaaS.
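Over-provisioning is easy to quantify as a ratio. The host sizes below are hypothetical examples, not figures from any actual provider.

```python
# The vCPU oversubscription ratio: how many virtual CPUs are promised
# per physical core on a host. Numbers here are illustrative only.

def oversubscription_ratio(vcpus_sold, physical_cores):
    return vcpus_sold / physical_cores

# A host with 16 physical cores carrying 64 sold vCPUs is 4:1 oversubscribed,
# so under load each "core" may really deliver a quarter of one.
print(oversubscription_ratio(64, 16))  # 4.0
```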

We’re told to code around this or be creative in our deployment tools, but should we really settle for that? A second-generation cloud is more creative in its virtual resource provisioning.

Better virtualization techniques than those Cloud 1.0 providers use today make it possible to dedicate CPU cores and RAM to a specific VM from pools of higher-quality hardware. That means better and more consistent performance for customers.

4: Ease-of-use: It should look like Visio

When you design an application architecture and the machines that comprise it, what do you do? Most people use a tool like PowerPoint or Visio to represent the components graphically and draw connecting lines to show their network connections or data flow. So why do all the major IaaS providers still present lists of items in tables with checkboxes and make you connect them mentally? Instead of forcing people to visualize components, just represent them visually.

Cloud Computing 1.0’s core audience was the developer, who is trained to think of the world as a set of abstract concepts that can be mentally linked together. With global IT spend at roughly $4 trillion and public cloud revenues at around $4 billion, capturing a big chunk of the other 99.9 percent of the available market means catering to a broader audience. Cloud 2.0 doesn’t ask people to make mental connections; it shows them in an easy-to-use graphical user interface. In fact, we’ve seen this before if you think about the kind of person who used an Apple IIe versus those who flocked to a Macintosh.

Why Cloud Computing 2.0’s Time is Now

VCRs got replaced by DVRs and streaming. Windows, not DOS, put a computer on every desktop and in every household. You don’t "Lycos" or "Alta Vista" anybody – you "Google" them. We’ve seen this pattern time and time again: a first-generation product creates a new, previously unimaginable marketplace, but it always gets improved upon.

1.0 is rarely the endgame. What we are sure to see in the years to come, and maybe even sooner, is an improvement in the features available in the public cloud. Per-minute billing is a great start, but more flexible instance sizes, live vertical scaling without a reboot, better and more consistent performance, and improved ease of use through graphical tools are among the features that Cloud Computing 2.0 promises to bring us.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.
