
Enterprise Networks: More Ways to Control the Major Drivers of OpEx

Here are five architectural decision points to consider to help keep your operational expenses in check, according to Michael Bushong of Plexxi.

Michael Bushong is the vice president of marketing at Plexxi. This column is part two of a two-part series looking at the cost factors in your networking infrastructure and how to control them. The first column, 5 Major Drivers of OpEx in the Enterprise Network, was published previously.


Operational expenses in the enterprise network consist of the total cost of ownership for network devices as well as the underlying data center architecture. In my previous post, I outlined five of the major drivers of operating expenses in the enterprise network. Now that we understand where these expenses come from, controlling these cost drivers needs to be a primary objective when designing any data center. Here are five architectural decision points to consider to help keep your operational expenses in check:

1. Start with fewer devices
Given the role that the number of devices plays in driving long-term operational expense, the most important decision a data center architect can make is the foundational architectural approach. To control costs, architects should favor designs that require the fewest devices possible. Legacy three-tier architectures are already being replaced by more modern two-tier approaches, and as technology continues to evolve, two-tier architectures are being supplanted by completely flat designs. To the extent that these flat designs reduce the total number of devices in the network, they can dramatically improve long-term cost models.
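
As a rough illustration (all figures below are hypothetical, not vendor data), a simple model makes the point: if per-device operating cost is roughly fixed, total OpEx scales almost linearly with device count, so a flatter design with fewer boxes wins over the life of the network.

    # Hypothetical back-of-the-envelope OpEx model: annual cost scales with device count.
    ANNUAL_OPEX_PER_DEVICE = 4_000   # assumed: power, space, support, admin time ($/yr)
    YEARS = 5

    designs = {
        "three-tier (access/agg/core)": 120,  # assumed device counts for the same port capacity
        "two-tier (leaf/spine)": 80,
        "flat / single-tier": 48,
    }

    for name, devices in designs.items():
        total = devices * ANNUAL_OPEX_PER_DEVICE * YEARS
        print(f"{name:32s} {devices:4d} devices  ~${total:,} over {YEARS} years")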

2. Utilization matters
Capacity costs can be measured in simple price-per-unit terms. If the usable capacity is only a fraction of what was purchased, the effective price per unit goes up, and low utilization forces you to carry more capacity overhead than the traffic actually requires. Architects should consider how to drive higher network utilization so they can take advantage of better economics. Utilization is largely a function of solution capability, so buyers will need to augment their purchasing criteria accordingly.
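
To see why utilization matters, consider the effective price per usable unit of capacity; the numbers below are made up purely for illustration.

    # Effective cost per usable Gbps rises as utilization falls (hypothetical figures).
    capex_per_gbps = 100.0        # assumed purchase price per Gbps of raw capacity ($)
    raw_capacity_gbps = 10_000    # assumed total fabric capacity purchased

    for utilization in (0.30, 0.50, 0.80):
        usable = raw_capacity_gbps * utilization
        effective_price = capex_per_gbps * raw_capacity_gbps / usable
        print(f"utilization {utilization:.0%}: ${effective_price:,.2f} per usable Gbps")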

3. Architect for uptime
Architecture has a profound impact on network downtime (scheduled or otherwise). Architects need to pay careful attention to failure and maintenance domains, resilience features, and upgrade procedures. Further, customers should consider how short-term capital costs, amortized over the life of the equipment, compare to longer-term downtime trends. These evaluations are particularly important where fees and penalties are concerned (as with managed or cloud services). Additionally, for certain applications (e.g., e-commerce, financial services, and so on), the cost of even minimal downtime can exceed capital costs.
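
A quick comparison of amortized capital cost against expected downtime cost shows why uptime belongs in the evaluation; every input below is an assumption, not a benchmark.

    # Compare amortized capital cost with expected annual downtime cost (hypothetical inputs).
    HOURS_PER_YEAR = 8760

    capex = 2_000_000                 # assumed equipment cost ($)
    lifetime_years = 5
    downtime_cost_per_hour = 50_000   # assumed, e.g. an e-commerce or trading application

    annual_capex = capex / lifetime_years

    for availability in (0.999, 0.9999, 0.99999):
        downtime_hours = HOURS_PER_YEAR * (1 - availability)
        downtime_cost = downtime_hours * downtime_cost_per_hour
        print(f"{availability:.5f} availability: {downtime_hours:6.2f} h down/yr, "
              f"~${downtime_cost:,.0f} downtime vs ${annual_capex:,.0f} amortized capex")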

4. SDN should provide relief
The central control model that SDN promotes provides a single point of administration, which will drive maintenance costs down. Accordingly, buyers should consider SDN controller-based capabilities as a top-tier purchasing criterion for data centers where cost is important.
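A sketch of what central control buys operationally: one API call to the controller replaces a per-device configuration loop. The controller URL, payload, and policy name below are hypothetical, not any specific vendor's API.

    # Sketch: with a central SDN controller, a policy change is one API call instead of
    # logging in to every switch. The endpoint and payload here are hypothetical.
    import requests

    CONTROLLER = "https://sdn-controller.example.com/api/v1"

    policy = {
        "name": "quarantine-vlan-300",
        "match": {"vlan": 300},
        "action": "isolate",
    }

    # One request; the controller pushes the resulting state to every device it manages.
    resp = requests.post(f"{CONTROLLER}/policies", json=policy, timeout=10)
    resp.raise_for_status()
    print("policy applied fabric-wide:", resp.json().get("id", "<unknown>"))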

5. DevOps is still in its formative stages
Automation is clearly the future for most large-scale data centers. The transition to a fully automated environment will largely depend on a management discipline and a tooling ecosystem that are both still emerging, generally referred to as DevOps. Put simply, DevOps provides a tailored glue layer between the infrastructure and its management models, maintained by a combination of automation frameworks (Chef, Puppet, Ansible, and so on) and in-house programming staff.
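
A minimal sketch of that glue layer, assuming an in-house "management model" kept as plain Python data and an automation framework that consumes rendered per-device snippets; the device names, attributes, and configuration syntax are invented for illustration.

    # Sketch of a DevOps "glue" script: translate an in-house management model into
    # per-device configuration snippets that an automation framework can push.
    # The model, device names, and config syntax are all hypothetical.
    management_model = {
        "leaf-01": {"vlans": [100, 200], "mtu": 9214},
        "leaf-02": {"vlans": [100, 300], "mtu": 9214},
    }

    def render(device, attrs):
        lines = [f"hostname {device}", f"system mtu {attrs['mtu']}"]
        lines += [f"vlan {v}" for v in attrs["vlans"]]
        return "\n".join(lines)

    for device, attrs in management_model.items():
        with open(f"{device}.conf", "w") as f:   # handed off to Chef/Puppet/Ansible, etc.
            f.write(render(device, attrs) + "\n")
        print(f"rendered {device}.conf")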

Because DevOps is in its formative stages, it is impossible to predict with any precision which tools will ultimately win. It is highly likely that companies will operate with some mix of commercial and homegrown tools designed to meet their specific requirements. We're already seeing this happen, and it has resulted in a fractured operational tooling landscape. Point-tool integrations will be handled case by case, typically driven by significant revenue opportunities, which will leave a scattered DevOps tool support matrix that will not perfectly match most customer environments.

Until DevOps frameworks become richer and natively support more management models, expect higher in-house development costs to maintain a fully DevOps-automated environment. Accordingly, data center architects will need to consider not just operational tool support but also how readily new tools can be integrated into the architecture over time. This should favor DevOps-friendly solutions built on underlying data service infrastructures that allow for repeated integration with new tools.

Restructuring any data center architecture begins with minimizing complexity. The ultimate measure of effective design, though, is whether complexity (and associated cost) remains low as applications place additional capacity and management requirements on the infrastructure. Taking these architectural points into consideration during the design phase of your data center will help keep unnecessary operational expenses to a minimum and avoid the overcomplexity that so often accompanies them.

