E-computing: The Next Steps

To meet demands for fast time to market and efficient operations, a uniform computing environment that spans on-premises data centers and public clouds is needed.

Todd Christ is an Intel Enterprise Architect.

The World Wide Web, an overlay on the interconnected networks we know as the Internet, revolutionized the way people shop and buy. Now a similar overlay, which we refer to as “the cloud,” is revolutionizing the way information technology—both the technology itself and the people and organizations that apply it—delivers innovation to businesses scrambling to survive and thrive in the chaotic, fast-moving business world the Web enabled.

Like e-Commerce, cloud computing is driven by business needs and enabled by advancing technology and shifting attitudes. The first ventures into cloud computing were simply applications of existing approaches and technology in a new environment, but we’re now at a tipping point where technology designed from the ground up to run in the cloud is changing the way we do computing.

The Cloud as a Platform for Innovation

Cloud computing has evolved. Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) offerings have opened up new possibilities for application deployment, and they let IT infrastructure teams respond instantly to demand for new servers simply by spinning up complex architectures in the cloud. Application development organizations now let cloud providers manage the full software stack an application needs to run. Hybrid cloud represents yet another deployment environment, adding “one more pork chop” to an already full plate. In our e-Commerce analogy, it’s similar to early efforts that put catalogs online but required the consumer to call to place an order: it worked great for the mail room but didn’t lighten the load where the orders actually got processed. Sometimes, due to regulations, that mail room needs to be both in “the cloud” and “on premises” to keep data secure and compliant.

What’s needed to meet escalating demands for fast time to market and efficient operations is a uniform computing environment that spans on-premises data centers and public clouds. Not two clouds, but a unified computing fabric sharing data, services, and applications. Applications and architectures should be deployable anywhere—wherever business needs and economics dictate. And if business needs or economics change, they should be easily re-deployable to a better location. The eventual execution environment should be transparent to application developers, so they can focus on innovation rather than plumbing. The environment should support DevOps and continuous delivery programs, so innovation can flow quickly and easily to the marketplace. The computing environment should enable resilient, highly available applications. And it should let us scale up or down instantly to respond to fluctuating business demands.

This computing nirvana is emerging today in the form of hybrid clouds—on-premises private clouds seamlessly linked to one or more public clouds and based on technology purpose-built for cloud computing. One key characteristic of hybrid cloud is that the infrastructure is abstracted, so application developers don’t have to specify, or even know, where the app will eventually run. This is enabled by several technologies that together define a new generation of “cloud-native” applications. Hybrid cloud platforms now need to be intelligent enough to place workloads on the hardware that best matches each application.

Application Platforms

Application platforms fill the gap between the application itself and the virtual machines (VMs) so easily provisioned and deployed in the cloud. They are cloud services embodying databases, Web servers, middleware, utilities, and orchestration software that insulate the app from the details of the infrastructure and create a consistent development and deployment environment for app developers. DevOps processes can upload an application to the platform, which automatically handles details like capacity provisioning, load balancing, deployment to multiple cloud locations, and application health monitoring. For developers, not having to worry about infrastructure makes the world simpler. For the business, it speeds time to market and creates consistency and flexibility.


Containers

Unlike the VMs of hardware virtualization, which contain an executing instance of a complete operating system, containers use operating system (OS) virtualization to wrap applications and their dependencies into lightweight packages that contain everything needed to run the app. The containers running on an OS instance share the kernel, so they use less memory and CPU. As a result, they fire up faster and run more efficiently. Like VMs, they are isolated from each other and from other parts of the execution environment, so they can’t clash. Dependencies like libraries and configuration files are packaged and deployed in the container with the application code, so the app will run on any system that supports the containerization method, as long as the binaries are compatible with the underlying architecture. Because containers are self-contained, deployment is easier, faster, and more reliable. This removes much of the overhead of a VM, where you must go through the BIOS, the VM boot process, and a full-sized guest operating system before any work gets done. In essence, you get to “the work” faster.
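To make that packaging concrete, here is a minimal sketch of a container image definition. The file names and service script are hypothetical placeholders, not from any specific product; the pattern, though, is typical—layer the dependencies and application code onto a slim base image that supplies only the OS userland, since the kernel comes from the host at run time.

```dockerfile
# Hypothetical Dockerfile: base image supplies the userland only;
# the running container shares the host kernel.
FROM python:3.12-slim

WORKDIR /app

# Package the dependencies with the application code...
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# ...then the application itself.
COPY service.py .

# No BIOS, no guest-OS boot: the container starts straight into "the work".
CMD ["python", "service.py"]
```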

For application teams, one of the most enchanting things about containers is that existing applications can be “containerized” with little or no code change. So when the fit is right, containers can offer a good approach to application portability in a hybrid cloud. From a security perspective, containers also present a smaller code footprint, which means less attack surface for malicious hackers to exploit.


Microservices

Application portability, however, is only one of the promises of hybrid clouds. We’re also promised agility to meet the time-to-market needs of demanding businesses. The vehicles for achieving agility are DevOps processes and continuous delivery programs that establish a delivery pipeline that lets development teams flow app enhancements through test and integration and into production quickly and easily.

This really only works, however, if the application is segmented into pieces small enough for a small team to work on and independent enough to avoid breaking other application components when changed. Large monolithic application structures extend testing and integration effort, so much of the history of software engineering has been about finding ways to separate, abstract, and enable reuse of software functions.

Microservices are the latest, cloud-native mechanism for doing that. They’re small, independent modules that provide well-defined application services ideally corresponding to business functions. Done right, microservices enable an application architecture where changes and enhancements flow easily through a DevOps pipeline to achieve the agility and fast time to market we seek.

In the cloud world, microservices are embedded in containers. The containers that compose an application or provide common services to a number of applications can be organized and managed as a cluster residing on a single server or distributed across many servers, and they communicate with each other using HTTP and standard interfaces.
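As a minimal sketch of that pattern—one small, independent module exposing a well-defined business function over HTTP—consider the following. The service name, endpoint, and payload are hypothetical illustrations; a real microservice would live in its own container, but the interface idea is the same.

```python
# Hypothetical microservice: one business function (quoting a price)
# exposed over HTTP with a JSON interface, built from the standard library.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class QuoteHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/quote":
            body = json.dumps({"sku": "ABC-123", "price": 19.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

# Bind to an ephemeral port and serve requests on a background thread.
server = HTTPServer(("127.0.0.1", 0), QuoteHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any other service (or container) can consume the interface with a
# plain HTTP call -- no shared libraries, no shared deployment.
with urlopen(f"http://127.0.0.1:{port}/quote") as resp:
    quote = json.loads(resp.read())
server.shutdown()
```

Because the only contract between services is the HTTP interface, each module can be rewritten, redeployed, or scaled without touching the others.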


Orchestration

Despite some ambitious writing and talking about containers and microservices, there’s nothing inherent in either technology that makes them fault tolerant, secure, or instantly scalable. These are exercises left to architects and developers. But just as OpenStack enables automated deployment and management of virtual resources in IaaS environments, container orchestration frameworks like Kubernetes and Docker Swarm provide tools to achieve the resiliency and on-demand scalability the cloud promises by enabling automated deployment, scaling, and management of containerized applications.

Cluster management software lets you orchestrate deployment and redeployment, scale up when the load increases, and recover when faults occur. These features must still be planned and implemented by development teams, but the tools and components readily available in the open source community offer a clear blueprint to help organizations implement the hybrid cloud nirvana we described earlier.
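In Kubernetes, for example, those features are expressed declaratively. The sketch below is a hypothetical Deployment manifest (the names and image are placeholders): the orchestrator keeps three replicas of the containerized service running, replaces any replica that fails, and scaling up is a one-line change to `replicas`.

```yaml
# Hypothetical Kubernetes Deployment for a containerized microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quote-service
spec:
  replicas: 3              # desired scale; the orchestrator maintains it
  selector:
    matchLabels:
      app: quote-service
  template:
    metadata:
      labels:
        app: quote-service
    spec:
      containers:
      - name: quote-service
        image: registry.example.com/quote-service:1.0   # placeholder image
        ports:
        - containerPort: 8080
```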

Toward a Better Cloud

The fundamental needs of an application have not changed—functionality, performance, security, reliability, and the flexibility to change. But cloud computing is providing a new set of technologies and processes to let application developers achieve those in a way that enables the agility businesses demand. Few organizations will make the transition from traditional application structures to cloud-native applications in one step. New cloud-bound apps are certainly candidates for cloud-native development. But most organizations will undertake an application rationalization process to identify apps that will remain in place, those that can be shifted to an application platform or containerized to execute anywhere in the cloud, those that might be redeveloped to take full advantage of what the cloud enables, and those that can be retired or replaced with SaaS offerings.

I suggested at the beginning that hybrid clouds would do for computing what e-Commerce has done for shopping and purchasing. E-Commerce is enabled by technologies like Internet search, a standard shopping cart motif, secure online payment systems, and integrated, tracked shipping that work together to provide a secure, convenient experience that helps consumers find the right product and pay the right price.

Hybrid clouds are doing a similar thing for IT. Technologies like virtualization, containerization, and orchestration frameworks are working together to enable a seamless experience for both developers and IT operations staff. Commercial services from public cloud providers link with private on-premises clouds to let IT choose where to deploy based on both business needs and technical considerations. Hybrid cloud solutions also shift the dull, tedious work, like data backups and hardware maintenance, to a vendor-supported model, freeing sysadmins to focus on more meaningful and rewarding activities in the data center. IT organizations are adopting new attitudes and processes that break down traditional barriers between development and operations organizations to speed innovation to market. The end result is a new wave of application and business innovation that can make businesses more agile and more competitive.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating.
