Should You Build or Outsource Your Data Center?
February 5th, 2010 | By: Jeff Hinkle, Industry Perspectives
The modern-day IT executive has become an integral part of any business. Systems are now vital to revenue generation, and the ever-increasing demands for online and near-instantaneous access have stretched budgets and staff. Maintaining a competitive enterprise is becoming more difficult every day.
One of the key ways to increase performance is to focus on your organization’s core value activities. These core activities produce the most value for an organization. Unfortunately, many executives lose sight of this and become engaged in “empire-building” or “tinkering” because something “looks like a fun project” or this “will really enhance my resume”.
The key difference is that when an activity is core to the business, it generally receives top-level attention and focus. An IT department's time should be dedicated to the activities that provide the most value, with resources (staff, software, budget) concentrated on those key areas. That might be a reservation and ticketing system for an airline, a portal that lets customers manage their finances for a bank, or a manufacturing control system for an automobile company.
To build or not to build?
It is imperative to outsource activities that are not core components of one's business. This frees up time to focus on core activities. A prime example of getting this wrong is when a company chooses to build its own data center.
Building a data center, and worrying about the 24×7 staffing and operation of hardware that requires year-over-year upgrades, is a distraction from the core mission. Many organizations are not equipped to conduct data center operations at an enterprise level. A smaller, in-house data center may occupy several thousand square feet and is likely to be engineered to Tier 2 standards (as defined by the Uptime Institute) at best. Maintenance is likely to be half-hearted and fully outsourced, since it is not a core value proposition. The systems are less likely to be staffed 24×7, and the "number of nines" of operational excellence drops by one or two.
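To make the "number of nines" concrete, here is an illustrative calculation (my own, not from the article) showing how each lost nine of availability translates into allowable downtime per year:

```python
def annual_downtime_hours(nines: int) -> float:
    """Maximum downtime (hours per year) implied by a given number of nines.

    E.g. 3 nines = 99.9% availability, so 0.1% of the year may be down.
    """
    unavailability = 10 ** (-nines)
    return unavailability * 365 * 24

for n in range(2, 6):
    pct = (1 - 10 ** (-n)) * 100
    print(f"{n} nines ({pct:.3f}% uptime): "
          f"{annual_downtime_hours(n):.2f} hours of downtime/year")
```

Dropping from four nines to three, as the author suggests happens in under-resourced in-house facilities, means accepting roughly ten times the annual downtime.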
Expansion is difficult, costly and time consuming. Often the facility is “maxed out” from Day One, and obtaining new capital is difficult. This endangers new projects that require more space for gear.
Operating costs are almost always higher for smaller companies, which have access to neither the bulk electrical rates of a large commercial facility nor its sheer purchasing volume.
In a rapidly changing environment, hardware purchased today may no longer fit a company's needs in as little as six months. Many companies find themselves stuck with systems that are inflexible and incapable of handling future projects. These organizations often lack the budget to maintain a 24×7 staff, and are unable to keep on-site cold spares of equipment in case of an outage.
This is where Hardware Infrastructure as a Service (HIaaS) companies excel: removing the hurdles that typical organizations face when trying to build in-house data centers.
The personnel required within a data center environment vary enormously. From data center operations to hardware support, and from software engineering to application support, most IT staff do not understand the needs of physical infrastructure management: HVAC and refrigeration, electrical work, and building maintenance. Each of these functions can be outsourced, but an unmanaged resource can easily increase your lead times (depending on the vendor). Using a variety of outsourced vendors further multiplies the complexity and number of vendor relationships, not to mention the points of conflict over facility responsibility.
Areas that must be covered within an enterprise data center include electrical, mechanical, hardware maintenance, cabling and network operations. These areas are also the lowest on the value chain and require significant scale before they become affordable.
Let's compare a purpose-built facility with internal space at a headquarters office.
This is one of the biggest differentiators, and it is driven by scale. A commercial data center / Infrastructure as a Service (IaaS) company will be located in a purpose-built facility engineered for the highest levels of "nines" reliability. This is possible because the facility is the provider's core value proposition, and it receives investment and attention accordingly. The systems are more robust, and the operations are more mature and refined.
These organizations also give the customer the flexibility to expand and contract on short notice with little to no capital expense (CapEx), since almost the entire invoice is structured as operational expense (OpEx). Spending less time managing this lowest layer of the stack allows a data center client to focus on the higher layers that produce the most value.
The commercial/enterprise data center facility is also likely to be built to a higher physical standard. Many internal corporate data centers are small and located in an office park adjacent to the company's headquarters workforce. A commercial facility will almost always apply more stringent and better-defined security measures. Physical separation, with accountability held by a third party away from internal sources, also eliminates potential strife: most data breaches and damage originate from within an organization. Who do you trust?
These are just a few of the reasons to host your data center in a purpose-built facility. It won’t necessarily save you money compared to doing it yourself, but you should see a dramatic difference in operations for the same cost of doing it yourself.
I like to illustrate this with the following example: how many companies make their own soft drinks, or perform their own surgery on their employees? Probably none! They could if they wanted to, but the costs and the poor quality of the result would make no sense. The same test applies in reverse: would a soft drink company or a medical company run its own data center at the same level as an established commercial facility?
The same concept applies to hardware. Hardware is now commoditized. Any last vestiges of differentiation flew out the window last year, as commercial IaaS offerings based on VMware and other virtualization technologies came to market and matured. Near-mainframe levels of high availability and fault tolerance are now available on commodity hardware. Applications have finally been decoupled from their hardware, and things have never been better.
It is not logical to make capital investments in hardware that may be obsolete within six months. You might get locked into a platform that is costly and stifling to your applications. With virtual private data center offerings, you can trade hardware as you need it without sharing it with other companies. This allows users to grow and take on more capacity as they need it, in hours instead of months. Gartner has even made the prediction that 20% of organizations will own no hardware within the next three years.
When you layer on desktop virtualization, you have the "royal flush of the IT world". This will be the biggest boon to IT executives in decades, allowing forward-thinking, progressive organizations to be more nimble than their competition.
Holding onto lower-value-chain duties, such as data center operations and hardware provisioning/support, is a risky business decision given the reasonably priced, better-performing alternatives. Those who embrace their core value propositions will be the ones to excel in their businesses and performance. Not to mention, it will surely make life easier.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
DLWarren | Posted February 6th, 2010
Hmmm... The article is written by a colo provider. Of course that has NOTHING to do with its slanted view... LOL
Then making an analogy of a corporate data center to a soft drink vending machine is beyond ludicrous.
And it's funny how the following is somewhat buried inside the story:
” It won’t necessarily save you money compared to doing it yourself, but you should see a dramatic difference in operations for the same cost of doing it yourself.”
Austin | Posted February 7th, 2010
If done right, outsourced hosted operations should be one-half to one-third the cost of doing it in house. Not a lot of hosting providers do it right, though.
Most people do not look at the net present value across five years of ALL the costs: CAPEX, OPEX, and software/hardware support.
The best value for the money today is to put YOUR equipment in THEIR racks and leverage the hosting provider's economies of scale on bandwidth. This seems to be the sweet spot, especially if you go with open storage.
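The commenter's five-year NPV point can be sketched as a simple discounted-cashflow comparison. All dollar figures and the discount rate below are hypothetical placeholders for illustration, not numbers from the article:

```python
def npv(cashflows, rate):
    """Net present value of annual cashflows, where index 0 is year zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

RATE = 0.08  # assumed discount rate

# In-house: heavy upfront CAPEX in year 0, then annual OPEX and support.
in_house = [500_000] + [120_000] * 5

# Outsourced: minimal CAPEX, higher recurring OPEX invoice.
outsourced = [20_000] + [180_000] * 5

print(f"In-house 5-year NPV:   ${npv(in_house, RATE):,.0f}")
print(f"Outsourced 5-year NPV: ${npv(outsourced, RATE):,.0f}")
```

The point is not these particular numbers but the method: comparing only year-one spending hides the upfront CAPEX and multi-year support costs that an NPV view makes explicit.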
How is fault tolerance available on commodity hardware? If you mean VMware FT, then you had better be ready to do a lot of configuration involving multiple servers. And even then, your applications will be able to use only one core in the server due to the lack of SMP support. Pity the poor person who has a dozen VMs running on an x86 server that fails: downtime, failover, restarts. Maybe that's high availability, and good enough for some. It's certainly not full fault tolerance.
Austin | Posted February 11th, 2010
We run 15-20 VMs per server on open iSCSI storage in a vertical configuration. The same stack is mirrored on a separate vertical, and both are fronted by a load balancer. If one vertical is lost, the other keeps running. We run a full suite of monitoring on the underlying hardware of each vertical. This is for mission-critical workloads, and we've never gone down in this configuration. It also allows us to patch and do rolling deployments.
For non-mission-critical workloads, the VMs are backed up to disk nightly. Since we have 40 Gbps of aggregate bandwidth to all servers over iSCSI, restoring a lost server can be done by redirecting the ports to a new server or restoring the VMs onto a new server. It takes anywhere from 10 minutes to an hour.