Why Your Cloud May Be Getting a Lot Smaller

It's a cloud world out there. Now, find out why your cloud may be shrinking.

Bill Kleyman

June 13, 2014


You know what they’ve been calling it? “The miniaturization of cloud computing.” New types of platforms that require less infrastructure and more logical control are being deployed at remote offices and within data centers. The entire data center landscape is being re-evaluated for better performance and improved resource utilization, and technologies like those from Nutanix are already actively redefining the resource-utilization standard for the cloud.

Very soon, logical technologies will become so powerful that physical infrastructure will serve as little more than a minimal pool of raw resources.

Before we get into the commodity conversation, however, it’s important to understand why and how the cloud model is becoming a lot more converged:

  • Rethinking the physical infrastructure. We’ve come a long way from mapping one application to one server. The modern infrastructure has become far more compact with the advent of powerful converged platforms, and even rack-mount servers have become markedly more powerful and efficient. As the data center and cloud model continue to evolve, you’ll begin to see micro-cloud environments pop up. By using blade systems or other converged platforms, you’re able to deliver every cloud component under one roof. Nutanix, SimpliVity, Scale Computing and several others are redefining the concept of a unified platform. Moving forward, the data center will become a lot more efficient and a lot more compact.

  • Incorporating logical controls. Just as the physical platform and underlying hardware resources have evolved, the logical layer has progressed as well. Software-defined technologies and advanced levels of resource virtualization are creating the virtual, or software-defined, data center platform. By using bandwidth and resources more economically, data centers are able to replicate data between geographic points much more effectively. These logical controls also introduce the concept of a “commodity cloud”: all of the control lives in software, with commodity hardware (network, storage and even compute) at the back end. The great part here is that logical controls allow you to manipulate workloads and data without depending on the underlying hardware (a minimal sketch of this idea follows the list).

  • Software-defined technologies. Let’s get a bit more specific here. There is now a software-defined acronym for pretty much every component in the data center, and the entire modern data center can be seen as a software-defined platform. Storage, network, compute, security and cloud computing can all be abstracted into the software-defined layer. Basically, this means better monitoring, management and control capabilities, and it also means a smaller cloud footprint. The great part about software-defined technologies is the agnostic nature of the software. Soon, it won’t matter what type of hypervisor you’re running or even what type of underlying hardware you have: software-defined controls are capable of managing your entire cloud and data center platform from the logical layer (the second sketch after this list shows the basic pattern).

  • New types of convergence (creating micro-clouds). The concept of convergence is really taking off. Many organizations are deploying purpose-built converged systems for a variety of purposes: a Nutanix system can host virtual applications delivered via the cloud, while an HP Moonshot chassis can process numerous parallel, cloud-based workloads. The point is that converged platforms are a very real piece of the modern data center and are being widely adopted. Their efficiency, scalability and resiliency make these systems very attractive, and the right type of converged infrastructure can be very cost effective.

  • Better data distribution. How far our ability to scale data has come in just a few years is hard to overstate. As we lay down larger fiber networks and optimize wireless infrastructures, our ability to deliver rich content at great scale will only improve. This type of data distribution over a powerful logical network allows data centers to be more distributed and smaller, and it dynamically allows data to live closer to applications, users and required resources (the last sketch below shows the idea). Still, given the trends, it seems that there will only be more data and information to process. Open-source platforms for Big Data management are also allowing organizations to control and quantify valuable information across the cloud.
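
To make the “commodity cloud” idea concrete, here is a minimal Python sketch of a logical control layer sitting above interchangeable commodity back ends. Every name in it (StorageBackend, CommodityDisk, ObjectStore, migrate_volume) is hypothetical and invented for illustration; this is a sketch of the pattern, not any vendor’s implementation.

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Logical control interface; the hardware behind it is interchangeable."""

    @abstractmethod
    def read(self, volume: str) -> bytes: ...

    @abstractmethod
    def write(self, volume: str, data: bytes) -> None: ...

class CommodityDisk(StorageBackend):
    """Plain local disks in one rack: cheap, generic hardware."""
    def __init__(self):
        self._volumes: dict[str, bytes] = {}
    def read(self, volume): return self._volumes[volume]
    def write(self, volume, data): self._volumes[volume] = data

class ObjectStore(StorageBackend):
    """A generic object store at another site; same logical interface."""
    def __init__(self):
        self._objects: dict[str, bytes] = {}
    def read(self, volume): return self._objects[volume]
    def write(self, volume, data): self._objects[volume] = data

def migrate_volume(volume: str, src: StorageBackend, dst: StorageBackend) -> None:
    """Move a workload's data without caring what hardware sits underneath."""
    dst.write(volume, src.read(volume))

rack = CommodityDisk()
remote = ObjectStore()
rack.write("vm-101-root", b"...disk image bytes...")
migrate_volume("vm-101-root", rack, remote)   # purely logical, hardware-agnostic
```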
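The “software-defined” pattern in the third bullet usually boils down to declaring desired state as data and letting a control loop reconcile reality against it. Here is a hedged toy sketch of that loop; the resource names and the shape of the state dictionaries are invented for illustration.

```python
# Desired state is pure data: the "software-defined" description of the platform.
desired = {"web-vm": {"cpus": 2}, "db-vm": {"cpus": 4}}

# Actual state as some agnostic driver reports it; the hypervisor brand is
# irrelevant to the control loop, only this dictionary's shape matters.
actual = {"web-vm": {"cpus": 2}, "old-vm": {"cpus": 1}}

def reconcile(desired: dict, actual: dict) -> list[str]:
    """Compute the actions that bring actual state in line with desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"create {name} with {spec}")
        elif actual[name] != spec:
            actions.append(f"resize {name} to {spec}")
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

for action in reconcile(desired, actual):
    print(action)   # create db-vm with {'cpus': 4}, then delete old-vm
```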
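And the “data lives closer to users” point in the last bullet is, at its simplest, a nearest-replica routing decision plus a replication fan-out. A toy sketch; the site names and latency figures are made up for illustration.

```python
# Measured round-trip latencies (ms) from one user to each replica site.
replica_latency_ms = {"chicago": 12.0, "ashburn": 31.5, "frankfurt": 104.2}

def nearest_replica(latencies: dict[str, float]) -> str:
    """Serve the read from whichever replica answers fastest."""
    return min(latencies, key=latencies.get)

def replicate(data: bytes, sites: dict[str, dict]) -> None:
    """Fan a write out so every region keeps a local copy."""
    for store in sites.values():
        store["user-profile"] = data

sites = {name: {} for name in replica_latency_ms}
replicate(b"profile bytes", sites)
print(nearest_replica(replica_latency_ms))   # -> chicago
```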

Remember: even with the proliferation of cloud computing, all of these technologies are still tied to some type of physical resource. We’ve seen dynamic growth around network, compute and storage; now, administrators are actively looking for ways to consolidate hardware while still creating an agile and robust cloud platform. This means deploying intelligent physical control methods and next-generation data center hardware. Through virtualization and logical controls, the data center is evolving into a distributed model capable of delivering powerful workloads and rich content. The user, the organization and our industry continue to evolve with new types of demands, and to keep up with these trends, data center and cloud technologies are pursuing both logical and physical optimizations to make the cloud model a lot more extensible and consolidated.

About the Author

Bill Kleyman

Bill Kleyman has more than 15 years of experience in enterprise technology. He also enjoys writing, blogging, and educating colleagues about tech. His published and referenced work can be found on Data Center Knowledge, AFCOM, ITPro Today, InformationWeek, NetworkComputing, TechTarget, DarkReading, Forbes, CBS Interactive, Slashdot, and more.
