Build or Lease? Creating the Best Data Center Strategy

Here’s what infrastructure pros need to know when making the most important decisions of their career

The set of factors companies must weigh when devising a data center strategy has changed drastically. Alongside the traditional requirements to maintain uptime and anticipate capacity growth, they now have to account for cloud services, a mobile workforce, and delivery of services at the network edge. This April, we focus on what it means to own an enterprise data center strategy in this day and age.

The volume of data generated today is growing at an astonishing rate, and demand for data center space has reached an all-time high, consistently outpacing supply in the top markets. Many organizations are struggling to develop effective data center strategies, frequently facing the familiar question: build or lease?

A decade ago the answer was easy: build. At that time, colocation services were not an ideal solution: fraught with concerns over technology deficiencies and adoption roadblocks, colo carried too much risk to be a viable part of many companies’ data center strategies. But times have changed, and today colo solutions have overcome many of those real and perceived roadblocks. There are, however, still scenarios where it makes sense for a company to host its own data center.

A number of strategic factors can influence these decisions, and they generally fall into four buckets: capital; application purpose and requirements; risk and perception of security; and control. While many of the factors are analytical in nature (such as financial savings), cultural preferences within a company also shape the strategy.


CAPEX versus OPEX

For many organizations, the build-versus-lease decision is influenced by cash flow preferences: specifically, whether a large initial capital expenditure (CAPEX) or smaller ongoing operating expenditures (OPEX) make more sense for the business. Simply put: does your company prefer to spend its money all at once or in smaller amounts over a long period of time?

The initial expenditures of data center construction, including network and utility installation costs, can add up to thousands of dollars per square foot. On top of that upfront cost, companies must also factor in the highly variable CAPEX and OPEX of operating the facility in-house. While the 10-year total cost of ownership may favor upgrading or building over leasing, the colo model is an attractive alternative for businesses that are sensitive to large capital outlays: it converts data center spending into fixed monthly costs based on actual needs and usage. A rough comparison of the two cost profiles appears below.
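
To make the trade-off concrete, here is a minimal Python sketch comparing 10-year cost profiles for building versus leasing. Every figure in it (cost per square foot, footprint, operating spend, lease rate) is a hypothetical assumption for illustration, not market data.

# Hypothetical 10-year cost comparison: build vs. lease.
# All figures below are illustrative assumptions, not market data.
BUILD_COST_PER_SQFT = 1_500     # assumed construction + fit-out cost, $/sq ft
FACILITY_SQFT = 5_000           # assumed white-space footprint
BUILD_OPEX_PER_YEAR = 900_000   # assumed in-house staffing, power, maintenance
LEASE_COST_PER_MONTH = 180_000  # assumed all-in colo lease for equivalent capacity
YEARS = 10

build_tco = BUILD_COST_PER_SQFT * FACILITY_SQFT + BUILD_OPEX_PER_YEAR * YEARS
lease_tco = LEASE_COST_PER_MONTH * 12 * YEARS

print(f"Build 10-yr TCO: ${build_tco:,}")  # large upfront CAPEX plus ongoing OPEX
print(f"Lease 10-yr TCO: ${lease_tco:,}")  # OPEX only, spread evenly over the term

With these made-up numbers, building wins on 10-year TCO ($16.5 million versus $21.6 million), but only after a $7.5 million upfront outlay that a lease avoids entirely, which is precisely the cash flow trade-off described above.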

Colo is also appealing when the business has other priorities for capital investment, often projects that further specific business goals. A healthcare provider, for example, may prioritize building hospitals rather than data centers. For younger companies experiencing exponential growth, using colo to preserve capital is often a prudent data center strategy. On the flip side, several of my clients have had surplus capital they needed to invest for financial and tax purposes.


The decision must also factor in a company’s present and future data center requirements. The average lifecycle of a data center is 15 to 20 years, during which the facility may go through as many as five IT equipment refresh cycles, so scalability is imperative. Future growth will directly affect data center needs, and your strategy should preserve flexibility. A colo solution offers more scalability without tying up CAPEX. For a mature company with a large data center footprint and moderate growth forecasts, however, it often makes sense to retrofit an existing data center, stay within the current footprint, and ultimately get more capacity for the money. In either scenario, growth plans are a key factor, as the projection sketched below illustrates.
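
As a simple illustration of why growth forecasts matter, this back-of-envelope projection estimates when a fixed build runs out of headroom. The footprint, growth rate, and built capacity are assumed values, not benchmarks.

# Illustrative capacity projection: when does a fixed build run out of headroom?
# The footprint, growth rate, and built capacity are all assumptions.
current_racks = 40       # assumed current footprint
annual_growth = 0.15     # assumed 15% yearly growth in capacity needs
built_capacity = 80      # assumed racks available in a purpose-built facility

racks = current_racks
for year in range(1, 21):  # 20-year facility lifecycle, per the figures above
    racks *= 1 + annual_growth
    if racks > built_capacity:
        print(f"Built capacity exhausted in year {year}")
        break

At an assumed 15 percent annual growth, this hypothetical build is exhausted in year 5, well inside a 15-to-20-year facility lifecycle; a company growing a few percent a year would stay within the same walls for decades. That gap is why the growth forecast, more than any single cost figure, drives the build-or-lease answer.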

Application Purpose and Requirements

Not all applications are equal, and they don’t all require equal infrastructure. Understanding how you use your data is crucial. Simply put: what is each application’s purpose, and what are its requirements?

As technology improves, applications have become more forgiving of latency. In manufacturing and heavy database applications, however, latency remains a major concern. Case in point: a mid-sized engineering firm decided to move to colocation based on a cost savings analysis; its management didn’t want to be in the “data center business.” Shortly after the move was completed, the IT team was inundated with calls from engineers struggling with productivity problems caused by the latency of servers that had moved from the on-premises data center to a colo facility. Had the management team dug beyond the financial costs and evaluated each application’s purpose and requirements, the lost productivity could have been prevented. Instead, the company faced a new dilemma: live with the latency, move to a closer colo, or move the applications back. The back-of-envelope calculation below shows why.
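
A minimal sketch shows how quickly small per-round-trip delays compound for a “chatty” application. The round-trip times and the round-trip count per user action are assumptions, not measurements from the case study.

# Back-of-envelope latency impact for a "chatty" application.
# RTTs and the round-trip count are assumptions, not case-study measurements.
ROUND_TRIPS_PER_OPERATION = 200  # assumed sequential client/server round trips
LAN_RTT_MS = 0.5                 # assumed on-premises LAN round-trip time
WAN_RTT_MS = 15.0                # assumed round-trip time to a metro-area colo

lan_delay_ms = ROUND_TRIPS_PER_OPERATION * LAN_RTT_MS
wan_delay_ms = ROUND_TRIPS_PER_OPERATION * WAN_RTT_MS
print(f"On-prem: {lan_delay_ms:.0f} ms per operation")  # 100 ms: barely noticeable
print(f"Colo:    {wan_delay_ms:.0f} ms per operation")  # 3,000 ms: a visible stall

A 15 ms round trip sounds harmless, but 200 sequential round trips turn it into a three-second stall on every operation, which is how a seemingly small latency increase becomes a flood of help desk calls.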

Another major data center strategy consideration is legacy equipment and applications. Often buried deep within a data center, they are ingrained in a company’s infrastructure, frequently with no way to shut them off or relocate them. The cost to upgrade often overshadows the benefits, so companies continue to maintain legacy applications. That was the case for one large multinational company, which had been running a custom application supporting parts of its CRM environment since the 1990s. An upgrade would cost millions, and the system is so ingrained in the company’s infrastructure that they can’t move off it. As a result, the company has to maintain an onsite data center to accommodate the legacy application.


Risk and Perception of Security

To a certain extent, risk can be measured and mitigated. Perception of security, however, is difficult to quantify and even harder to change. Recognizing your company’s risk tolerance is one thing, but understanding its perception of security needs is key to deciding whether to build a data center or to lease one. Simply put: how risk-averse is your organization, and how secure does it think your data needs to be?

I often see companies that want control of their intellectual property at all times to avoid any type of risk. Too often, these organizations have convinced themselves that they are the only ones who can protect their data. That usually isn’t the case.

The biggest roadblock for colo solutions is the perception that their facilities lack network or physical security. The reality is that most colo networks are very secure, often more secure than corporate networks, and their physical security is comparable to or better than that of some of the most high-profile financial institutions. Because their business depends on it, colo providers monitor, maintain, and update security at a higher level than most other organizations.

When weighing capital and risk, the deciding factors are the resources and personnel needed to maintain the required level of security and the physical condition of an in-house data center. Often the decision can be answered by asking: “What are the real and perceived risks of moving my data to a colo, and what are the costs of keeping it in-house?”

Control

Hand in hand with risk comes control. Company culture or philosophy about outsourcing can be a deciding factor when it comes to colocation. The instinct to keep things in-house often stems from a fear of losing control of the data center. Though partly rooted in reason, it is also an emotional reaction to “what-if” scenarios, the potential loss of responsibilities, or a headcount reduction.

The most prevalent control issues are having onsite staff and around-the-clock access to servers. To address these concerns, many colo providers offer remote-hands technicians who help with basic tasks such as rebooting a hung server. Nonetheless, IT equipment accessibility should factor into the decision: if a colocation provider is chosen, it is important to confirm that support staff can reach your equipment within a reasonable timeframe, should the need arise.

Location is also a key consideration in data center strategy. Beyond latency, many organizations like having their data within a few miles of their offices. Proximity offers a sense of security, and more than one IT director has told me they like to be able to “walk in and touch and feel it.” For many, an in-house data center helps them sleep better at night.

For some, control of their infrastructure is core to the business, and the ramifications of “something happening” outweigh the cost savings. Consider a multi-million-dollar online financial transfer company processing thousands of transactions per hour: downtime is financially devastating, and the risk of outsourcing to a third party is too great. In other scenarios, organizations with regulatory compliance requirements need operational controls and the ability to audit those controls. Many multi-tenant data center providers, however, understand that they become an extension of the tenant’s infrastructure and must therefore meet ever-changing compliance and regulatory requirements.

Best of Both Worlds

The decision to build or lease a data center isn’t an easy one. The first step in creating a data center strategy is recognizing the nuances in each situation as well as the attitudes and perceptions that exist within an organization. In some instances, it clearly makes sense to build a data center and in others it simply isn’t feasible.

For some, the answer rests on a compromise in which risk and control factors can be managed while the organization benefits from outsourcing non-critical applications. Simply put: the solution lies in figuring out where the eggs need to be and in which basket. Determine which option provides the higher reliability and redundancy, and place the most critical data in that basket.

About the author: Tim Kittila is director of the data center practice at Parallel Technologies, overseeing the company’s data center consulting services. He previously served at the company as director of data center infrastructure strategy and was responsible for data center design and build solutions and led the mechanical and electrical data center practice. Before joining Parallel Technologies in 2010, he was vice president at Hypertect, a data center infrastructure company. Kittila earned his bachelor of science in mechanical engineering from Virginia Tech and holds a master’s degree in business from the University of Delaware’s Lerner School of Business.
