Jason Meserve is solutions marketing manager for CA Technologies' Service Assurance portfolio, which helps ensure the performance, availability and quality of IT services as infrastructure and cloud options evolve.
The IT groups in most organizations serve multiple “bosses.” First, there are the business owners who rely on business applications to drive revenue and improve productivity. Second, there are the end-users – external customers or internal employees – who demand an exceptional experience. What these two groups have in common is that they simply want applications to work, and work flawlessly; they are not concerned with how that happens or how much it costs. But there is a third boss that does care about costs: the CFO’s office. It too wants things to work, but it would like to keep budgets in check.
While IT budgets have remained relatively flat, the demand for IT services is growing sharply, driven in part by the increased use of mobile devices and the consumerization of IT. In addition, today’s IT organizations are tasked with managing an increasingly complex infrastructure composed of physical, virtual, cloud and mainframe systems, all of which must be optimized to deliver today’s business-critical applications.
Performance Tied to Demand
Poor performance is often related to increased demand for services. Sometimes it’s a sudden spike in demand caused by a “Black Friday” event, while other times performance problems creep up over time as demand for service slowly grows until it reaches a tipping point. In either case, the root cause of the performance problem ties back to not understanding and proactively managing computing capacity on an ongoing basis.
Previously, IT got around this by over-provisioning infrastructure for peak demand. This is a very costly way to manage a data center; no organization can afford to have lots of idle servers sitting around eating into the bottom line. Moreover, while virtualization has helped lift physical server utilization out of the single digits, the average utilization of a virtualized server is still in the 20 to 30 percent range, meaning systems remain underutilized.
While monitoring tools such as application performance management can warn of system slowdowns and impending disaster as certain thresholds are met or exceeded, IT still needs a way to cost-effectively address the capacity issue without increasing risk to the business. The days of throwing hardware – and therefore money – at the problem are gone for most shops. IT must be able to get the most value for its dollar while minimizing risk and continuing to meet the expectations of end-users and the business.
Ensuring Application Performance While Reliably Predicting Future Growth
In order to keep up with the increasing demand for IT services, and deliver an exceptional end-user experience while staying within budget constraints, IT organizations must be able to proactively identify, diagnose and resolve performance problems by monitoring all transactions. They must also be able to assess current capacity requirements while reliably predicting future growth, without overbuilding the system and spending needlessly on hardware and cloud services that may go unused.
Technologies such as application performance management (APM) and capacity management can help IT organizations reduce risk and keep a close eye on business-critical application performance while ensuring that capacity needs are right-sized for today’s needs and future growth.
A modern APM system delivers 360-degree visibility into all user transactions across a hybrid-cloud infrastructure – physical, virtual, cloud and mainframe – to understand the health, availability, business impact and end-user experience of critical enterprise, mobile and cloud applications. With a good APM deployment, organizations can proactively identify, diagnose and resolve problems throughout the application lifecycle to put themselves firmly in control of the end-user experience and optimize the performance of critical, revenue-generating services.
What are the Benefits of Capacity Management?
Capacity management provides predictive analytics that allow users to simulate changes to application and infrastructure components in order to help ensure application response time goals are met once the application is moved to the production environment. Capacity management provides prescriptive insight into the infrastructure needed for optimal IT operations including support for both new workloads and workloads that change over time. Tangibly, this prescriptive insight not only helps to right-size the application environments on release, but ultimately helps to reduce the number of performance issues often incurred in the roll out of a new application or release.
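As a toy illustration of this kind of pre-production check – not any vendor's actual algorithm – a simple M/M/1 queueing approximation can estimate whether a proposed configuration will meet a response-time goal before the application goes live. All function names and figures here are hypothetical:

```python
# Hypothetical what-if check: estimate mean response time for a proposed
# configuration using a basic M/M/1 queueing approximation, then compare
# it against the application's response-time goal.

def estimated_response_time(arrival_rate, service_rate):
    """Mean response time in an M/M/1 queue: 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        return float("inf")  # system saturates; goal cannot be met
    return 1.0 / (service_rate - arrival_rate)

def meets_goal(arrival_rate, service_rate, goal_seconds):
    return estimated_response_time(arrival_rate, service_rate) <= goal_seconds

# Planned load: 90 requests/sec against a server that handles 100/sec.
print(meets_goal(arrival_rate=90, service_rate=100, goal_seconds=0.2))   # True
print(meets_goal(arrival_rate=90, service_rate=100, goal_seconds=0.05))  # False
```

Real capacity-management tools use far richer models than this, but the principle is the same: test the configuration against the goal analytically before committing hardware.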
The combination of these technologies works in two ways:
First, when a problem does occur, APM will alert IT operations of the incident. For example, let’s say a server has exceeded 80 percent utilization. In this scenario, IT knows it needs to take action and can dig down into the server workload data. Capacity management can then be used to run what-if scenarios for the affected server and entire application delivery chain, providing a plan that addresses the machine causing the alert in APM. The fix could be something as simple as upgrading the underlying hypervisor to a new version, which will change the workload characteristics and bring the server’s performance back into line.
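A minimal sketch of the alert-then-analyze flow described above, using an 80 percent utilization threshold and made-up server samples (no real APM product's API is implied):

```python
# Hypothetical threshold alert: flag servers whose CPU utilization
# exceeds 80 percent, the trigger for deeper capacity analysis.

UTILIZATION_THRESHOLD = 0.80

def servers_needing_analysis(samples):
    """samples maps server name -> recent CPU utilization (0.0-1.0)."""
    return [name for name, util in samples.items()
            if util > UTILIZATION_THRESHOLD]

samples = {"app-01": 0.55, "app-02": 0.91, "db-01": 0.83}
print(servers_needing_analysis(samples))  # ['app-02', 'db-01']
```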
A second scenario for the combined technologies involves planning for future growth, right-sizing the environment to control costs and mitigating risk to the business. In this case, the capacity management tool uses APM performance data from production environments to enable customers to perform workload scale-out analysis. IT organizations can run scenario analyses simulating a variety of load patterns across a variety of architectural options so the best-suited environment can be easily ascertained. Different hypervisors, host hardware configurations, VM configurations, operating system versions, and so forth can be analyzed to determine the optimal solution based on the cost and performance requirements of a specific application. This helps IT optimize the production infrastructure with the right system configurations for the planned workload. It can also identify workload sharing opportunities, new procurement requirements, and cloud burst capacity needs. In short, it helps IT reliably predict its future capacity needs based on real-world data, not rough rules of thumb.
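The scenario comparison described above can be sketched as a simple cost-versus-capacity search. Everything here is illustrative – the candidate configurations, costs, and the 80 percent safety ceiling are invented for the example:

```python
# Hypothetical scenario analysis: compare candidate configurations and
# pick the cheapest one whose projected utilization at the planned peak
# load stays under a safety ceiling. All figures are illustrative.

CEILING = 0.80  # keep projected utilization under 80 percent

candidates = [
    # (name, monthly_cost, capacity in requests/sec)
    ("4-vCPU VM",   300, 1500),
    ("8-vCPU VM",   550, 3200),
    ("16-vCPU VM", 1000, 6500),
]

def best_configuration(peak_load, options):
    """Return the lowest-cost option that keeps utilization under CEILING."""
    viable = [(cost, name) for name, cost, capacity in options
              if peak_load / capacity < CEILING]
    return min(viable)[1] if viable else None

# At 2,400 requests/sec, the 4-vCPU VM would saturate; the 8-vCPU VM
# is the cheapest option that stays under the ceiling.
print(best_configuration(peak_load=2400, options=candidates))  # 8-vCPU VM
```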
A Right-sized Infrastructure that Meets End-user Expectations
The unique combination of APM and capacity management helps IT organizations better handle the demands of the business and end-user expectations while keeping costs in check and mitigating risk by building a right-sized IT infrastructure. During an incident, APM can warn of potential problems due to overtaxed systems, while capacity management can be used to help create a solution for the problem and ensure that business-critical and revenue-generating applications run efficiently.
By leveraging the performance management capabilities of APM along with the predictive ability of capacity management, IT can also benefit from:
- Reducing costs of IT infrastructure with safe, reliable consolidation and virtualization;
- Aligning IT investments with business needs for IT services;
- Accurate, reliable decision support for IT investments;
- Improving productivity by preventing performance problems from occurring;
- Improving mean time to repair when problems do occur;
- Gaining 24x7 monitoring of all end-user transactions;
- Ensuring an exceptional end-user experience;
- Balancing application performance, cost and risk;
- Accurate prediction of application behavior from IT changes; and
- Peace of mind from reliable capital expenditure planning, service level commitment and IT service delivery.
This combination of tools also helps build more accurate models to reliably predict future capacity needs, allowing the IT organization to run an unlimited number of “what-if” scenarios to understand how its applications will perform under different conditions. By using capacity planning models based on real performance data, IT can find the right combination of hardware, virtualization and cloud services to meet the needs of the business while softening the blow to the bottom line. This helps IT mitigate risk to the business, because it can reliably predict that the environment will handle load increases – whether seasonal or otherwise – since the capacity planning models are based on real data.
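One of the simplest capacity models of this kind is a trend line fit to historical utilization data, used to estimate when a system will hit its ceiling. The sketch below uses an ordinary least-squares fit on invented monthly samples; real tools use far richer models, but the idea is the same:

```python
# Hypothetical capacity forecast: fit a straight line to monthly
# utilization samples and estimate how many months remain before an
# 80 percent ceiling is reached. Sample data is invented.

def months_until_ceiling(history, ceiling=0.80):
    """history: utilization per month, oldest first (values 0.0-1.0)."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    # Least-squares slope of utilization growth per month.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None  # utilization is flat or shrinking; no ceiling in sight
    return max(0.0, (ceiling - history[-1]) / slope)

# Utilization grew from 40% to 60% over five months (~5 points/month),
# so the 80% ceiling is roughly four months away.
print(round(months_until_ceiling([0.40, 0.45, 0.50, 0.55, 0.60]), 1))  # 4.0
```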
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena.