Hewlett Packard Enterprise headquarters in Palo Alto, California (Photo: HPE)

HPE Wants to Give Data Center Automation a Fresh Start

Platform unifies automation scripts from DevOps teams and developers in heterogeneous data centers

Our theme this month is intelligent data center management software tools. Data center management technologies have come a long way, as companies find themselves having to manage ever bigger and more diverse environments. From using machine learning to improve data center efficiency to using automation to manage everything from servers to cooling systems, we explore some of the latest developments in this space.

When Hewlett Packard Enterprise announced its Data Center Automation Suite [PDF] a little over a year ago, it was with the promise of providing tools for automating provisioning, patching, and compliance across “the full stack.” On Tuesday, the company gave the idea another try, indicating that it’s learned some things about heterogeneous data centers over the past year.

HPE appears to be very mindful that data centers are already deploying open source automation tools such as Chef and Puppet. Now that more data centers are moving containerized environments into production, the tools used by IT or DevOps professionals and those used by software developers suddenly find themselves alongside one another.

According to Nimish Shelat, HPE’s marketing manager for Data Center Automation, the service will now work to absorb the automation scripts that both departments are using — and may still continue to use — into a single environment under a newly reinforced, unified portal. Mind you, DCA has been integrating Chef recipes and Puppet scripts already, but HPE wants to give the platform a fresh start, beginning with how it integrates into existing data center environments.

See also: Why HPE Chose to Ship Docker in All Its Servers

“Despite the fact that [data center operators] have investments in place,” Shelat told Data Center Knowledge, “they are realizing that, as the complexity and scale of their environment grows, the tools they have invested in are not enough. Some of them are not heterogeneous or multi-vendor in nature, and as a result, they end up with multiple tools they have to deal with to manage their environments.”

Same Tasks, Different Tools

Indeed, there are configuration management platforms such as Chef and Puppet, container orchestration tools such as Kubernetes and Mesosphere DC/OS, and application performance monitoring tools from New Relic and Dynatrace — all of which claim to provide some aspect of that “single pane of glass” for data center management and automation. There are enough of these single panes of glass, it seems, that stacked end to end they could form their own skyscraper.

See also: Cisco Tetration Brings Data Center Automation to Legacy Apps

But as subcultures form within organizations around the use and maintenance of these individual tools, HPE argues, the job of integrating tasks across departments ends up being done manually. Carrying workloads across silos, remarked Shelat, introduces innumerable opportunities for human error.

“We have realized there is a common pattern,” he said. “Server folks tend to do provisioning, patching, and compliance; network folks tend to do provisioning, patching, and compliance; database and middleware folks are doing the same. When you talk to all of them, they want to bring automation into their lifecycles, so they can do things more quickly; and they all desire a standardized, consistent way of doing things.”

See also: HPE Rethinks Enterprise Computing

Above and Beyond

In framing the present objectives for Data Center Automation, Shelat painted a mental picture of an automation layer above the level of task-oriented automation, in which he placed Chef and Puppet. In this upper layer is the “standardized, consistent” method to which he referred: an oversight process composed of flowcharts that can be assembled visually in a drag-and-drop development environment. Each of these flowcharts represents a broad automation function, such as provisioning a service, patching an application currently in production, or validating a process against compliance rules.

Within each of these flowcharts, an automation process — which may include a Chef or Puppet script — may be incorporated as what he called an “atomic element.”

“We are not saying it will be one or the other; there will be certain lines of business and certain areas of IT that are adopting open source technologies like Chef and Puppet,” conceded Shelat, “or that are trying them out in a certain area. In which case, despite the diversity of investments that might exist in the environment, we can package up the automation that is created by Chef and Puppet, through their scripts and recipes, and adopt it and integrate it into the automation that’s delivered through Data Center Automation. You can do the design-level work at the Chef and Puppet level, but the execution can be triggered through DCA.”
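
To make the idea of an “atomic element” concrete, here is a minimal sketch in Python of how a higher-level flow might wrap existing Chef and Puppet runs as individual steps. The Flow and AtomicStep names, and the recipe and manifest they reference, are illustrative assumptions rather than DCA's actual interfaces; the only real commands involved are the standard chef-client and puppet apply CLIs.

```python
# Purely illustrative sketch -- not HPE DCA's actual API. It shows the general idea
# of a higher-level "flow" whose atomic steps wrap existing automation, including
# Chef and Puppet runs invoked through their standard CLIs.
import subprocess
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class AtomicStep:
    """One box in the flowchart: a named, self-contained unit of automation."""
    name: str
    action: Callable[[], int]  # returns a process exit code

    def run(self) -> int:
        print(f"[step] {self.name}")
        return self.action()


def chef_converge(runlist: str) -> int:
    # Reuse an existing Chef recipe as an atomic element by shelling out to chef-client.
    return subprocess.call(["chef-client", "--local-mode", "--override-runlist", runlist])


def puppet_apply(manifest: str) -> int:
    # Likewise, wrap an existing Puppet manifest without rewriting it.
    return subprocess.call(["puppet", "apply", manifest])


@dataclass
class Flow:
    """A broad automation function (e.g. provisioning) assembled from atomic steps."""
    name: str
    steps: List[AtomicStep] = field(default_factory=list)

    def execute(self) -> bool:
        # all() stops at the first failing step, mimicking a flowchart's error branch.
        return all(step.run() == 0 for step in self.steps)


# Hypothetical flow: the recipe and manifest names are stand-ins, not shipped examples.
provision_web_tier = Flow("provision-web-tier", steps=[
    AtomicStep("configure OS baseline", lambda: puppet_apply("site.pp")),
    AtomicStep("install application stack", lambda: chef_converge("recipe[webapp::default]")),
])

if __name__ == "__main__":
    provision_web_tier.execute()
```

The design work still happens at the Chef and Puppet level, as Shelat describes; the flow simply sequences those pieces and triggers their execution.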

Compared with Jenkins

Puppet and Chef are configuration management tools. A recipe in Chef, or a manifest in Puppet, specifies the infrastructure resources an application requires to run as intended, whether in the client’s data center or on its cloud platform. A CI/CD platform such as Jenkins can stage these items as units in a continuous integration pipeline — automation one layer above configuration management. So HPE is evidently positioning Data Center Automation as an alternative to Jenkins. Instead of pipelines, DCA offers flowcharts, which may be more familiar, or easier to digest, for IT admins who perceive automation as a somewhat less intricate process than a mammoth cluster of pipelines.

Where a typical Jenkins pipeline runs through stages such as build, test, and deploy, DCA now employs a four-stage process: “Design,” “Deploy,” “Run,” and “Report.” An automation task, or “flow,” can be tested and simulated in the “Design” environment. Once it is promoted to the “Deploy” stage, it can be triggered by another flow, or in response to a service request or an incident report, Shelat said. An audit log is generated continually, and an end-of-day report gives admins analytics on how flows are being triggered and how they are performing in response.
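
As a rough sketch of that lifecycle (the stage names follow Shelat's description; the PatchFlow class, the incident ID, and the audit-log handling are invented for illustration), a flow might be promoted out of Design and then triggered by an incident, leaving an audit trail behind it:

```python
# Illustrative only: the stage names come from the article, everything else is assumed.
import datetime

AUDIT_LOG = []  # DCA generates an audit trail continually; a plain list stands in here


def record(event: str) -> None:
    AUDIT_LOG.append((datetime.datetime.now().isoformat(), event))


class PatchFlow:
    def __init__(self, name: str):
        self.name = name
        self.stage = "Design"  # flows start life in the Design environment

    def promote(self) -> None:
        self.stage = "Deploy"  # once deployed, a flow can be triggered from outside
        record(f"{self.name} promoted to Deploy")

    def trigger(self, reason: str) -> None:
        if self.stage != "Deploy":
            raise RuntimeError("only deployed flows can be triggered")
        record(f"{self.name} triggered by {reason}")
        # ... the flow's atomic steps would run here ...
        record(f"{self.name} completed")


flow = PatchFlow("patch-production-app")
flow.promote()
flow.trigger("incident INC-1234")  # e.g. fired in response to an incident report
for entry in AUDIT_LOG:            # end-of-day reporting would aggregate entries like these
    print(entry)
```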

One of these, he explained, is the Return on Investment report, which estimates the amount of human work time reclaimed through automated responses over a given period. Those reports can be aligned with goals declared beforehand, he said, much the way a Google Analytics report shows how the wording and placement of an online ad campaign measure up against goals for revenue and viewership.

“When they prioritize the top things they want to automate,” said Shelat, “they are most likely the time-bound things, or the areas they tend to slow down the most. Provisioning of servers could take several days. Then they have an idea of how much time they will save, when they have this entire thing automated and integrated end-to-end.”
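
The arithmetic behind that kind of ROI figure is simple enough to sketch. The formula and the numbers below are assumptions for illustration, not HPE's, using server provisioning as the example:

```python
# Back-of-the-envelope ROI math for automating a recurring task; all figures assumed.
def hours_reclaimed(manual_hours_per_task: float,
                    automated_hours_per_task: float,
                    tasks_per_month: int,
                    months: int = 12) -> float:
    """Human work time saved over a period by automating a recurring task."""
    saved_per_task = manual_hours_per_task - automated_hours_per_task
    return saved_per_task * tasks_per_month * months


# Example: provisioning drops from roughly three working days of hands-on effort
# (24 work hours) to about one hour of oversight, for 20 servers a month.
print(hours_reclaimed(24, 1, 20))  # 5520.0 hours reclaimed over a year
```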
