Chris Heyn is the General Manager of KEMP Technologies Italy. He lives in a small village called Arcene, about 40 km from Milan. For the past 14 years Chris has been involved in business development for ICT companies looking to expand their activities into Italy, the eastern Mediterranean and the Middle East.
There is no time like the present to give the data center a good cleaning, or in other words, “optimize your data center.” Let’s take a look at how you can dust off the old appliances and make sure your data center is running as efficiently as possible.
The Cost Model is Changing
Traditionally, each application that users need to run has been given its own server to host it. From a network application management point of view, this appears to make perfect sense: physically separating the servers makes them easier to manage and control. Or does it?
Actually, there is a fundamental problem. According to IDC, Gartner and even local sources in the United States, Japan, Italy and South Africa, the cost of running each server that hosts an application keeps climbing. To be clear, this is not the cost of buying the server, or even the application that runs on it, but the cost of keeping it running: electricity, cooling (air conditioning or other means) and fire prevention. Added up across a data center, these running costs are frightening.
What is the cost, or “the damage” as they say in the East End of London, of this approach? Here are the numbers for running each server on an annual basis:
- IDC – $6,551
- Gartner – $5,248
- Skeptics – $3,275
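Those analyst figures translate into a quick back-of-the-envelope calculation. The sketch below is purely illustrative: the 100-server fleet and the 20:1 consolidation ratio are hypothetical numbers chosen for the example, not figures from the analysts.

```python
# Annual running-cost estimates per server, from the figures quoted above.
COST_PER_SERVER = {"IDC": 6551, "Gartner": 5248, "Skeptics": 3275}

def annual_run_cost(servers: int, cost_per_server: int) -> int:
    """Total annual power/cooling/fire-prevention cost for a fleet."""
    return servers * cost_per_server

def consolidation_savings(servers: int, ratio: int, cost_per_server: int) -> int:
    """Savings if `servers` physical hosts are virtualized at `ratio`:1."""
    remaining = -(-servers // ratio)  # ceiling division: hosts left afterwards
    return (servers - remaining) * cost_per_server

# A hypothetical 100-server fleet at the IDC figure, consolidated 20:1:
print(annual_run_cost(100, COST_PER_SERVER["IDC"]))            # $655,100/year
print(consolidation_savings(100, 20, COST_PER_SERVER["IDC"]))  # $622,345/year saved
```

Even at the most skeptical per-server figure, retiring physical hosts pays back quickly.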
While the analysts' estimates for keeping each server running differ, they do agree that it is a problem. Here are the three principal causes of the problem, using IDC's report as an example:
1) Physical server sprawl.
As a consequence of the huge numbers of installed servers, staffing costs on systems maintenance have risen 600% to over $120 billion annually, and the cost to power and cool installed servers has more than tripled from $2 billion to $10 billion per year during that same period.
2) Overprovisioning and underutilized assets.
While the rise in the number of installed systems has been dramatic, equally concerning is the low utilization of these servers. Most applications consume a fraction of a server's total capacity, averaging 5–10% utilization on a typical x86 server.
3) Lack of integrated management tools and service management frameworks.
Customers have multiple, disparate systems management tools in place that have both unique and overlapping functionality. Many customers have under-invested in systems management and automation tools relative to the investments they have made in systems infrastructure. This has meant that many data centers employ manually intensive processes, including the integration of service management frameworks, resulting in greater burdens on staffing.
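The utilization figures in cause 2 above hint at how far consolidation can go. Here is a minimal sketch; the 70% per-host utilization ceiling is a hypothetical planning target, not a number from the IDC report.

```python
def apps_per_host(avg_util_pct: int, target_pct: int = 70) -> int:
    """How many apps at avg_util_pct average utilization fit on one host
    while staying under a target_pct utilization ceiling (integer percents)."""
    return target_pct // avg_util_pct

# At the 5-10% average utilization cited above:
print(apps_per_host(5))   # 14 apps per host at 5% average utilization
print(apps_per_host(10))  # 7 apps per host at 10% average utilization
```

In other words, servers averaging 5–10% utilization could plausibly be consolidated at somewhere between 7:1 and 14:1 before hitting a conservative headroom limit.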
Users probably recognize all these problems. Fortunately, there are three smart tips to correct the problem:
Destroy the slave to the server myth. The application rules!
The server is dead; long live the application. Traditionally, optimization was all you could achieve at the transport layer (OSI layer 4). Now users can manage and manipulate performance at layer 7, the application itself. For a data center manager, this means that rather than amputating a patient's leg to remove a verruca, you can treat and remove it with a pair of pincers, saving the leg and the foot.
Virtualize – In and Out of the Cloud
Virtualization is the new creed, the new approach to application deployment. Notice that this is application deployment, not server deployment: we are now in a position where application performance is the key, not the server. Compared with only a few years ago, the choices available to IT managers and CTOs, the people responsible for making all our web and IT services work, have been revolutionized. Users may not believe it, but times could not be better if you are prepared to be a little smart. Do not compromise: insist on application load balancing.
Once data center and IT managers have done their due diligence and received quotes from the cloud providers, the cost savings to the company look more than interesting. Each quote is well detailed, specifying the necessary server capacity, firewall and application security appliance functionality. But one last question remains: do we need load balancers?
The answer is yes! The latest application delivery controllers (ADCs), or load balancers, will ensure that your network servers run at peak efficiency thanks to their ability to direct traffic to each server based upon its health, checked at both layer 4 and layer 7. Whether users are running applications in their own network or in the cloud, load balancers are a crucial component of the network infrastructure.
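The difference between the two health checks is worth making concrete. A layer-4 check only verifies that a TCP connection can be opened; a layer-7 check asks the application itself whether it is healthy. The sketch below illustrates the idea in plain Python, not any particular ADC's implementation; the `/healthz` path is an assumed convention.

```python
import http.client
import socket

def l4_healthy(host: str, port: int, timeout: float = 2.0) -> bool:
    """Layer-4 check: can we open a TCP connection at all?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def l7_healthy(host: str, port: int, path: str = "/healthz",
               timeout: float = 2.0) -> bool:
    """Layer-7 check: does the application itself answer 200 on a health URL?"""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", path)
        ok = conn.getresponse().status == 200
        conn.close()
        return ok
    except OSError:
        return False

def healthy_pool(servers):
    """Keep only servers passing both checks; traffic goes nowhere else."""
    return [s for s in servers if l4_healthy(*s) and l7_healthy(*s)]
```

A server whose OS is up but whose application has hung will pass the layer-4 check and fail the layer-7 one, which is exactly why checking at the application layer matters.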
Once the load balancers are in place, users can relax knowing they have thoroughly "cleaned" their data center. A clean data center is a secure and efficient data center, ready to handle its workloads.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.