These days, in one way or another, many organizations are finding ways to leverage the benefits of cloud computing. Over the last five years the cloud model has evolved to support a number of new use cases, users, and applications, and through that evolution several different deployment models have emerged.
Now we see a new trend emerge. Organizations have grown with the capabilities of cloud computing and are building better deployments around the right type of model. One of the most popular cloud architectures revolves around hybrid cloud infrastructure. Interconnectivity has improved, and organizations are able to distribute their environments much more effectively. These improvements in bandwidth, storage, network, and compute allow public and private data center resources to be shared more efficiently.
Let's create a hypothetical scenario. You're an organization of a few thousand users. Because of the nature of your business, you see sporadic shifts in user count and data load, and there are times when resource constraints and uptime become serious concerns. For the most part, your data center is privately held, with a variety of virtual workloads including:
- Virtual applications
- Virtual desktops
- Mail servers
- Other hosting servers
Now a decision has been made to extend your existing infrastructure into a public cloud. This doesn’t necessarily have to be AWS or Azure. In fact, many organizations select popular data center providers to build their own cloud model. With that in mind and the goal set, what are the right steps to create a hybrid cloud environment? What are the right ingredients to help distribute data center resources and create an even more robust infrastructure?
Although not all-encompassing, these are some of the recommended steps to consider when building out your own hybrid cloud platform:
- The data center or cloud provider. What you're doing is extending your existing platform into a cloud model. One of the first things your organization must do is conduct a Business Impact Analysis as well as a Cloud Readiness Assessment. These two planning projects allow your organization to understand which existing workloads need to be extended into the cloud and how the move will impact your business. A Readiness Assessment helps you further understand whether your applications, users, and even data sets are ready for a cloud migration. Based on these analyses and an ROI report, you'll have a few options: build, lease, or go cloud. The exact answer will revolve around the findings in the respective reports.
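The build/lease/cloud decision above ultimately comes down to the numbers in the BIA and ROI reports. As a purely illustrative sketch (all figures and the `monthly_cost` helper are hypothetical assumptions, not real pricing), a simple comparison might look like:

```python
# Hypothetical cost comparison for the build / lease / cloud decision.
# All figures are illustrative assumptions, not real pricing data.

def monthly_cost(capex, amortize_months, monthly_opex):
    """Amortize up-front spend and add recurring operating cost."""
    return capex / amortize_months + monthly_opex

options = {
    # option: up-front capital, amortization period, monthly operating cost
    "build": monthly_cost(capex=500_000, amortize_months=60, monthly_opex=8_000),
    "lease": monthly_cost(capex=50_000, amortize_months=60, monthly_opex=14_000),
    "cloud": monthly_cost(capex=0, amortize_months=60, monthly_opex=18_000),
}

best = min(options, key=options.get)
for name, cost in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:,.0f}/month")
print("lowest modeled monthly cost:", best)
```

The point isn't the specific answer; it's that the ROI report turns a fuzzy architectural debate into a comparison your organization can defend.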
- Selecting your hardware. Now that you've completed your analysis and know where you're deploying your cloud, it's time to look at a variety of hardware options. Server, storage, network, and compute platforms have come a long way, and new converged platforms are a primary reason the cloud footprint is shrinking. There are real trade-offs among rack-mount, blade, and converged platforms, and it's critical to look at purpose-built systems as well. If you're looking to process numerous parallel workloads for a specific task, you'd probably look at HP's Moonshot chassis. If you're deploying a branch cloud location with virtual applications and some data sets, maybe a Nutanix platform is the right choice. In other instances, such as a much larger and more powerful deployment, you may need more horsepower; when that scenario arises, Cisco UCS or other powerful blade-based systems are excellent hardware options. Their level of scale and integration with the virtual layer make them highly efficient systems.
- Creating a virtual platform. The modern data center is now being defined as the software-defined data center. Let's be honest: if you're moving to the cloud, you'll need to look at logical and virtual controls. This starts at the hypervisor and can extend all the way to managing Big Data on a Hadoop cluster. In between you'll have virtual security services, logical management and monitoring controls, and software-defined technologies. The really powerful part of the modern data center is the range of logical controls we now have: network, storage, compute, and even the cloud itself can fall into the software-defined category. When creating your cloud platform, make sure to look at these systems to help you scale, operate more efficiently, and improve cloud resiliency.
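To make the "software-defined" idea above concrete: the common pattern is to declare desired state as data and let software reconcile the environment toward it. A minimal, hypothetical sketch (the resource names and in-memory sets are invented for illustration, not a real SDDC API):

```python
# Minimal sketch of software-defined control: declare desired state,
# then compute the actions needed to reconcile the actual environment
# toward it. Resource names and state sets are hypothetical.

desired = {"web-vm-1", "web-vm-2", "db-vm-1", "virtual-firewall"}
actual = {"web-vm-1", "db-vm-1", "old-test-vm"}

def reconcile(desired, actual):
    """Return (to_create, to_delete) so actual converges on desired."""
    return sorted(desired - actual), sorted(actual - desired)

to_create, to_delete = reconcile(desired, actual)
print("create:", to_create)   # resources declared but missing
print("delete:", to_delete)   # resources no longer declared
```

Whether the controller manages virtual networks, storage pools, or security services, this declare-and-reconcile loop is what separates software-defined systems from hand-configured ones.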
- Integrating replication and distribution mechanisms. A big part of a hybrid cloud is the ability to replicate and distribute data. First, it's important to understand what you're replicating and to where. Many organizations deploy hybrid cloud platforms to bring applications and data closer to their users; others use a hybrid cloud to handle bursts and branch locations. Regardless, it's important to know how data is being moved, backed up, and optimized. Data replication can be a tedious process if not done properly. It's also important to take security into consideration: your data is a critical asset, and it must be secured at the source, along the route, and at the destination. Fortunately, virtual security appliances and services can help make this process a bit easier.
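Part of securing data "at the source, along the route, and at the destination" is simply proving it arrived intact. A toy sketch of that idea (the in-memory dictionaries standing in for storage endpoints are an assumption; a real deployment would replicate over an encrypted channel):

```python
import hashlib

# Toy replication sketch: copy an object between two in-memory "sites"
# and verify integrity by comparing SHA-256 digests at both ends.
# Real deployments replicate over encrypted channels to real storage.

source_site = {"user-data.bin": b"critical business records"}
dest_site = {}

def replicate(name, src, dst):
    """Copy one object and return True if source/dest digests match."""
    dst[name] = src[name]  # stand-in for the actual network transfer
    return (hashlib.sha256(src[name]).hexdigest()
            == hashlib.sha256(dst[name]).hexdigest())

ok = replicate("user-data.bin", source_site, dest_site)
print("replicated intact:", ok)
```

Checksumming is cheap insurance: it catches silent corruption during transfer long before a failover forces you to trust the replica.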
- Incorporating automation and orchestration. Part of the beauty of a hybrid cloud is the ability to set up automation tasks and watch it all go. Resources can be provisioned and de-provisioned based on demand, management can be a lot more proactive, and various components of your hybrid infrastructure can all fall under one management layer. Open-source management systems allow you to replicate data center resources into a hybrid cloud model. Technologies like CloudPlatform, OpenStack, and Eucalyptus all provide direct extensibility into a hybrid cloud model. Furthermore, automation tools allow for easier replication and control of critical resources. As you build your hybrid model, make sure to look at cloud-based orchestration and automation tools for help.
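Provision-on-demand is the heart of that automation. As a hedged illustration of the rule an orchestrator might apply (the thresholds and instance counts are assumptions; platforms like OpenStack express this declaratively rather than in application code):

```python
# Hypothetical burst-scaling rule: add public-cloud capacity when
# utilization runs hot, release it when demand drops.
# Thresholds and instance counts are illustrative assumptions.

def scale_decision(utilization, instances, min_instances=2, max_instances=10):
    """Return the new instance count for one utilization reading."""
    if utilization > 0.80 and instances < max_instances:
        return instances + 1   # burst into the public cloud
    if utilization < 0.30 and instances > min_instances:
        return instances - 1   # de-provision idle capacity
    return instances           # steady state

instances = 2
for reading in [0.85, 0.90, 0.88, 0.25, 0.20]:
    instances = scale_decision(reading, instances)
print("instances after the readings:", instances)
```

The same shape of rule, evaluated continuously by the orchestration layer, is what lets a hybrid cloud absorb a seasonal burst without anyone racking new hardware.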
- Balancing your workloads. Load-balancing technologies allow you to point users, data, and even applications to the appropriate data center resources. For example, your load-balancing platform can be intelligent enough to route a user to the nearest data point while controlling resources and application access. Next-generation load-balancing platforms are much more than just load balancers: they provide aspects of next-gen security, offer a variety of virtual services, can act as an application firewall, and control a range of on-prem and off-prem resources. Best of all, these platforms can be virtual or physical.
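The "route a user to the nearest data point" behavior can be sketched as a proximity-plus-capacity rule. The sites, latencies, and load figures below are invented for illustration, not measurements from a real platform:

```python
# Hypothetical geo-aware balancing rule: prefer the lowest-latency
# site that still has headroom. Site data is invented for illustration.

sites = [
    {"name": "private-dc", "latency_ms": 12, "load": 0.95},
    {"name": "cloud-east", "latency_ms": 25, "load": 0.40},
    {"name": "cloud-west", "latency_ms": 70, "load": 0.10},
]

def pick_site(sites, max_load=0.85):
    """Choose the closest site whose current load is under max_load."""
    candidates = [s for s in sites if s["load"] < max_load]
    return min(candidates, key=lambda s: s["latency_ms"])["name"]

print("routing user to:", pick_site(sites))
```

Note that the nearest site (the private data center) loses here because it is nearly saturated; blending proximity with capacity is exactly what makes hybrid balancing useful.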
- Management and control. One of the most important pieces of your hybrid cloud platform will revolve around monitoring, management, and control. Staying proactive and catching challenges before they become real issues is a big part of running an efficient cloud environment. Remember, you now have an ecosystem of technologies all working together to replicate resources between a private data center and your cloud-based environment. Having a single pane of glass allows you to delegate controls and permissions and gives you direct visibility into every aspect of your extended data center model.
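Staying proactive usually means alerting on metrics that are *approaching* a limit, not just ones that have crossed it. A minimal sketch of that idea (the metric names, normalized values, and warning threshold are assumptions, not a real monitoring API):

```python
# Minimal proactive-monitoring sketch: flag metrics trending toward
# their limit before they cross it. Names/thresholds are hypothetical;
# values are normalized so 1.0 means "at the hard limit".

metrics = {
    "private-dc/storage_used": 0.78,
    "cloud-east/cpu": 0.55,
    "cloud-east/replication_lag": 0.91,
}

def warnings(metrics, warn_at=0.75):
    """Return metrics at or above the early-warning threshold."""
    return sorted(name for name, value in metrics.items() if value >= warn_at)

for name in warnings(metrics):
    print("early warning:", name)
```

Surfacing these early warnings in one pane of glass, across both the private and public halves of the environment, is what turns monitoring into actual control.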
Building the right type of hybrid cloud platform requires a bit of planning and preparation. When done right, however, a hybrid cloud becomes an extremely powerful extension of your existing infrastructure. With use cases spanning disaster recovery and business continuity (DR/BC) to seasonal workload bursting, hybrid platforms create great ways to utilize resources across a distributed plane. Next-generation load-balancing technologies now allow for the seamless transfer of users and workloads between distant data center points. All of this helps improve both data center and business operations.