Keeping Cloud Workloads on the Right Flight Path
December 20th, 2011 By: Industry Perspectives
Billy Cox is a Director of Cloud Software Strategy with Intel’s Data Center Group. Since joining Intel in 2007, Billy – who has 30+ years of industry experience – has been leading the cloud strategy efforts for the Intel Software and Services Group.
In the course of my work, I often count on airplanes to get me from city to city, and from country to country. I can fly with confidence on any airplane I board because I know there are mechanisms in place to ensure that all commercial aircraft follow certain policies. I don’t have to stop to verify that the plane has the right set of policies in place—those are all set at a higher level. In addition, the policies are set by a small group of people and then propagated to a larger community that actually implements and measures conformance—that’s how the system scales.
Policies Set at a Higher Level
This is similar to the way workloads fly through a cloud environment. In a large cloud environment, which might get hundreds or even thousands of virtual machine requests a minute, you couldn’t realistically assign policies for individual workloads to direct each to an appropriate compute pool. Instead, you set higher-level policies that dictate where certain types of workloads will run, and then let the cloud management system do the rest of the work.
The most common example is the selection of the zone or geography. When you submit a workload you tell the cloud management software where you want the workload to run. You might make this choice because you need to keep the activity within a certain geography or simply to keep it close to its data.
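To make the idea concrete, the routing decision can be modeled as matching a requested zone against a set of compute pools. This is a hypothetical sketch in plain Python—the pool names and data structures are invented for illustration and are not the actual OpenStack API:

```python
# Hypothetical sketch: route a workload request to a compute pool by zone.
# Pool names and attributes are illustrative, not real OpenStack objects.

POOLS = {
    "us-east": {"zone": "us-east", "hosts": ["node1", "node2"]},
    "eu-west": {"zone": "eu-west", "hosts": ["node3"]},
}

def place_workload(request):
    """Return the name of the pool whose zone matches the request, or None."""
    for name, pool in POOLS.items():
        if pool["zone"] == request["zone"]:
            return name
    return None

request = {"name": "billing-app", "zone": "eu-west"}
print(place_workload(request))  # -> eu-west
```

The point is that the submitter only states the high-level constraint (the zone); the mapping to actual hosts stays inside the management layer.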
If you have workloads that need to run in a highly secure environment that is compliant with Payment Card Industry (PCI) standards, you would want policies in place that direct those workloads to trusted compute pools designed for PCI compliance. It would then be up to the cloud management software to implement and maintain adherence to that policy.
To illustrate how this could be done, we’ll use the OpenStack cloud operating system as an example. This open-source software, which was spearheaded by Rackspace and NASA, is a popular option for building and managing highly scalable clouds.
As a first step, you would need to add a “PCI trusted zone” to the OpenStack environment. These zones are logical groups of Nova controllers and VM hosts. This PCI trusted zone would implement the necessary attributes to provide compliance with the PCI recommendations. One of those attributes would be the use of an approved and appropriately configured server. Selected technology such as Intel Trusted Execution Technology (Intel TXT) can be used to give you the assurance that the virtual machine managers have been measured and checked against a known, trusted list.
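A simplified illustration of that attribute check follows. In a real deployment the verification would be done by Nova scheduler machinery backed by an attestation service; here the measured-launch check against a known-good list (the role Intel TXT plays) is simulated with invented host names and measurement values:

```python
# Simplified sketch of trusted-pool filtering. The measurement check
# stands in for an Intel TXT measured-launch verification against a
# known-good whitelist; all names and values are hypothetical.

KNOWN_GOOD_MEASUREMENTS = {"0xabc123", "0xdef456"}

HOSTS = [
    {"name": "pci-node-1", "measurement": "0xabc123"},
    {"name": "pci-node-2", "measurement": "0xdef456"},
    {"name": "general-node-1", "measurement": "0xbadf00d"},
]

def is_trusted(host):
    """A host is trusted if its launch measurement matches the whitelist."""
    return host["measurement"] in KNOWN_GOOD_MEASUREMENTS

def trusted_pool(hosts):
    """Return only the hosts eligible for the PCI trusted zone."""
    return [h["name"] for h in hosts if is_trusted(h)]

print(trusted_pool(HOSTS))  # -> ['pci-node-1', 'pci-node-2']
```

A host whose measurement does not match the known, trusted list simply never enters the PCI zone, so no per-VM manual checking is needed.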
Intel and OpenStack Collaboration
Intel is currently working with the OpenStack community to develop a policy-based scheduler for inclusion in OpenStack. This scheduler will allow the OpenStack user to set policies that dictate where workloads must run. In addition, this capability will provide an indication of compliance with the requirements of the trusted pool.
In addition to trust as a form of policy, Intel is also working with the OpenStack community to enable policies that cause workloads to be assigned to server pools in a manner that makes the best use of power and cooling resources.
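A toy sketch of how such a power-aware policy might combine with a trust policy: among the hosts that satisfy a workload's requirements, pick the one with the most power headroom. All host names, wattage figures, and field names below are made up for illustration:

```python
# Hypothetical sketch: among hosts that satisfy a workload's policy
# (here, a trust requirement), choose the one with the most power
# headroom. Wattage figures are invented for illustration.

HOSTS = [
    {"name": "node-a", "trusted": True,  "power_cap_w": 400, "power_now_w": 350},
    {"name": "node-b", "trusted": True,  "power_cap_w": 400, "power_now_w": 200},
    {"name": "node-c", "trusted": False, "power_cap_w": 400, "power_now_w": 100},
]

def schedule(workload, hosts):
    """Pick the policy-compliant host with the most power headroom."""
    eligible = [h for h in hosts if h["trusted"] or not workload["needs_trust"]]
    if not eligible:
        return None
    return max(eligible, key=lambda h: h["power_cap_w"] - h["power_now_w"])["name"]

print(schedule({"name": "pci-app", "needs_trust": True}, HOSTS))  # -> node-b
```

Note that node-c has the most headroom overall but is excluded because it fails the trust policy; the scheduler applies the hard constraint first, then optimizes for power within what remains.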
Managing Large Cloud Environments
In a huge cloud environment, you couldn’t realistically do any of this on a manual, VM-by-VM basis. But with the right management capabilities at your command, you can establish policies at a high level and then let the cloud management software do the rest of the work for you.
Along the way, you can feel confident that thousands of workloads are flying through your cloud in a controlled manner, just the way airplanes fly through the air.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.