
Why IT Automation Matters Today

Prior to the rapid growth of virtualization, applications were easily configured because a company’s web server, database server and middleware were all in one place. That's no longer the case today, and automation is key for managing the complexity.

Justin Nemmers is the Director of the US Public Sector Group at Ansible.

The benefits of IT automation are vast. It frees developers and sysadmins to spend less time on repetitive administrative tasks and more time delivering value to the business, and it improves workflow and quality of life. Yet many organizations struggle to adopt IT automation because their environments are too complex.

For example, consider the following scenario: Your development team has just completed weeks of work and delivered its masterpiece, a ready-to-deploy application, to IT. But the application doesn't work once IT deploys it. Why? The network port the development team used must be opened on the firewall so end users can reach the software, but IT changed the firewall rule and never told development. No procedure or policy captured all of the changes needed to deploy the app successfully, and now you're looking at an unnecessary delay, one that could have been avoided altogether if a better structure had been in place to keep these disparate pieces in sync. What's worse, discovering the error once doesn't mean it has been corrected for all future releases; the same roadblock can happen again and again. It's a vicious cycle.

IT departments struggle to manage thousands of configurations and hundreds of applications, often with highly separated teams working in silos. The reason is simple: the teams that develop applications are typically not integral parts of the teams tasked with deploying them, so needed changes simply never get conveyed back to the development team.

In the past, applications and hardware were closely coupled. Applications came from a single vendor, complete with their own hardware and software, and were supported as a unit within your environment. Hardware was only loosely standards-based, which meant organizations chose a vendor and were then tied to that vendor for both hardware and software. Changing vendors was difficult; it could be done, but only by redesigning nearly everything in your environment. As time went on, however, the tight coupling of hardware and software began to loosen.

Applications for Any Operating System

Hardware became commoditized, and open, standards-based architectures allowed software providers to build their own operating environments. Suddenly, software developers could develop applications for any operating system, regardless of the hardware. At the same time, companies gained more freedom, as they no longer had to rely on a single vendor for their hardware and software needs. However, as is often the case, more choice brought more complication to a once simple, straightforward process. Some would call this the tyranny of choice.

While hardware could now be bought from anyone, and organizations could choose their own operating systems and applications, they also had to manage all of these pieces in-house rather than relying on their hardware providers for support.

With the rise of virtual environments, it was no longer possible to point to one server and easily identify what it did. In this new landscape, the data center continuously grew, and managing it fell on the IT department’s shoulders.

Though there are a number of tools available to help manage these more complex and virtual IT environments, they are often incomplete. When these tools were built, applications were easy to configure because a company’s web server, database server and middleware were all in one place.

But today, application workloads are more widely distributed, and IT applications and configurations are more complex. Single point-in-time configuration management alone is simply no longer adequate.

Think about it like this: When you come home from the grocery store, there is a precise and specific set of processes – an orchestrated workflow – that needs to happen in order for you to get from inside your car to your sofa.

First, you pull into your driveway. Then you stop the car, open the garage door, open your car door, shut the car door, walk to the house, unlock the door, and so on. This orchestrated set of events needs to occur the same way, every time; you can't open your car door before you stop the car.

Similarly in IT, there has historically never been a single tool that could accurately describe the end-to-end configuration of each application in a particular environment. Some tools could describe the driveway, for example, but they could not also accurately describe how the car interacts with the driveway, or how the key opens the door (its height and width, and whether the handle is on the left or the right, etc.). This sequence of seemingly basic tasks is analogous to the process of developing and deploying any application in the modern IT landscape.

The key is helping IT organizations understand the big picture of how these hundreds of configurations, applications and teams of people can successfully work together. It's the piece of strategy that separates the IT teams that will successfully transform and adapt to rapidly changing technology from those that will continue to spend too much money just struggling to keep their heads above water.

Ideally, development teams would create a playbook and deliver it alongside their application so that IT could use it to deploy and manage that application. When changes are made to the playbook, they are sent back to the development team so that the next time the application is deployed, no one is reinventing the wheel.
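To make the idea concrete, here is a minimal sketch of what such a playbook could look like in Ansible's YAML format. The host group, package name and port below are hypothetical placeholders, not taken from any real deployment; a real playbook would encode whatever the application actually requires, including the firewall change from the earlier scenario.

    ---
    # Illustrative sketch only: host group, package name and port are hypothetical.
    # The firewalld task assumes the ansible.posix collection is installed.
    - name: Deploy and configure the example application
      hosts: webservers
      become: true
      tasks:
        - name: Install the application package
          ansible.builtin.package:
            name: example-app          # hypothetical package name
            state: present

        - name: Ensure the application service is running
          ansible.builtin.service:
            name: example-app
            state: started
            enabled: true

        - name: Open the application's port on the firewall
          ansible.posix.firewalld:
            port: 8080/tcp             # hypothetical port
            permanent: true
            immediate: true
            state: enabled

Because the playbook travels with the application, the firewall requirement is written down and applied the same way on every deployment, rather than living in someone's head.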

This eliminates the massive back and forth and miscommunication between the two teams, which in turn reduces delays in deployment. Automating the process means fewer human errors and better communication and collaboration overall. Companies can save money, shorten deployment times and the time between releases, and validate compliance frequently and automatically. It injects some agility into traditional development and operations methodology.

Once a playbook has been created for the first deployment, IT departments already have a proven roadmap for how to do it right the next time.

