Jerry Gentry is Vice President, IT Program Management at Nemertes Research
I used to work for a company that did financial transactions. Every week, the owner of one of the main transactions reported greater than 99.5% uptime on the application. A new CIO came on board, and shortly afterward there was a significant outage on the network. In the weekly meeting, the application owner again reported greater than 99.5% uptime, and the new CIO asked how that could be.
You can see where I am going. It didn’t make sense to the new CIO that we would report an application being up when no one could reach it. From the customers’ perspective, it was down. That started a new tradition of measuring the end-to-end performance of critical applications. That same desire has found its way (with varying degrees of success) into several companies I have worked for. Many companies have developed tools that aid in that effort and those tools have evolved to a pretty good degree of sophistication.
I am a big advocate of application modeling. That is the process of capturing key end-to-end transactions for an application and looking at the time it takes for each component in the transaction chain to do its part. The transactions can be simple or complex, but once you have captured them and created a baseline for what is typical, a whole new world of opportunity arises.
In most cases the tool takes the view of the end user. The most common method of capturing the transaction is with a protocol analysis tool, usually inserted on the client side of the network. In my previous article I noted that becoming effective in this area requires leveraging technology and resources that do not typically reside in the data center. In particular, someone who can place the capture tool in the right location and interpret what the packets mean is critical to making this work. Yes, we are data center people and our focus is on utilizing expensive resources in the most effective way. Application modeling is an important method to show the effectiveness of your architecture and designs.
In doing the analysis, start with the overall time and a confirmation that the captured transaction is representative and normal. Then start to break it down into its constituent parts. The results from this approach will be eye-opening. Not only will you see the traffic between the client and the application server, you will see other servers that participate in the overall transaction. Servers that provide DNS, back-end databases and other services will show up as part of the transaction.
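The breakdown described above can be sketched in code. The following is a minimal illustration, not a replacement for a real protocol analysis tool: it times two components of a hypothetical transaction (DNS resolution and TCP connect) individually, the same way a capture tool would attribute elapsed time to each participant in the chain. The hostname and port are placeholder assumptions.

```python
import socket
import time

def time_component(label, fn):
    """Run one step of the transaction and record its elapsed time."""
    start = time.perf_counter()
    result = fn()
    elapsed = time.perf_counter() - start
    return label, elapsed, result

def model_transaction(host="example.com", port=80):
    """Break one client-to-server transaction into timed components."""
    timings = []

    # Component 1: DNS resolution -- a server that often shows up as a
    # hidden participant in the overall transaction.
    label, t, addr = time_component("dns", lambda: socket.gethostbyname(host))
    timings.append((label, t))

    # Component 2: TCP connect -- the network round trip to the
    # application server itself.
    def connect():
        s = socket.create_connection((addr, port), timeout=5)
        s.close()
    label, t, _ = time_component("tcp_connect", connect)
    timings.append((label, t))

    return timings
```

A real end-to-end model would add components for the application request itself and any back-end database calls, but the shape is the same: one labeled timing per participant, summing to the overall transaction time.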
The better tools will provide nice graphical representations showing the turnarounds and the names of the devices involved from beginning to end. It is during this process that you will be able to spot anything that looks unusual, like a call to a database that goes into a wait state for more than a few seconds, or previously unknown dependencies among systems.
Once you have a baseline for overall performance, use it as the basis for a repeatable process. That way, as you make changes in your data center, such as virtualization, you can re-run the same analysis in the new environment and see how the results differ. If you make modeling a prerequisite for deploying any major new application, you will accomplish two things. First, you will be able to predict how it will perform when it is deployed. That is always an interesting discussion, since most application development and testing is done in a controlled lab with very little consideration for latency. Second, as noted, you now have a baseline for that application for comparison in the future. Each application you model provides an opportunity to have a discussion with the business about their transactions. That gets them to see you as someone who is adding value in meeting their business objectives.
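The repeatable-process idea can be made concrete with a small sketch. Assuming per-component timings like those a modeling tool produces, the function below flags components whose time regressed past a threshold against the stored baseline; the component names and the 20% threshold are illustrative assumptions, not anything prescribed by a particular tool.

```python
def compare_to_baseline(baseline, current, threshold=0.20):
    """Flag components whose elapsed time grew more than `threshold`
    (as a fraction of the baseline) since the baseline capture."""
    regressions = {}
    for component, base_time in baseline.items():
        new_time = current.get(component)
        if new_time is None:
            continue  # component absent from the new capture
        if base_time > 0 and (new_time - base_time) / base_time > threshold:
            regressions[component] = (base_time, new_time)
    return regressions

# Hypothetical timings in seconds, before and after a data center change.
baseline = {"dns": 0.004, "tcp_connect": 0.012, "db_query": 0.350}
after_change = {"dns": 0.004, "tcp_connect": 0.013, "db_query": 0.610}
print(compare_to_baseline(baseline, after_change))
```

Run against a change such as virtualizing the database tier, this kind of comparison turns "the application feels slower" into a specific component and a specific number, which is exactly the conversation with the business the baseline enables.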
To get more useful data center management strategies from Nemertes Research download the Q1 2012 Data Center Knowledge Guide to Enterprise Data Centers.