
Experience vs. Connectivity: What Should Really Matter to Today’s CIO

The reality is mere connectivity is not cutting it anymore. The rise of "Shadow IT" is not in response to the lack of available applications. Rather, it is an organic movement in reaction to something bigger: overall experience, writes Michael Bushong of Plexxi.

Industry Perspectives

November 11, 2013


Michael Bushong is the vice president of marketing at Plexxi.




For years, the primary role of the network has been to provide connectivity. Success was defined as all things being able to talk to all other things on the network. But as the function of the IT department shifts from “supporting cost center” to “active participant” in a company's overall value proposition, is taking the lowest-common-denominator approach to declaring success adequate?

The reality is mere connectivity is not cutting it anymore. The rise of "Shadow IT" is not in response to the lack of available applications. Rather, it is an organic movement in reaction to something bigger: overall experience. Simply sidestepping outright catastrophe doesn't constitute a win in the eyes of those who use the infrastructure IT provides.

Changing Role for IT

So how does IT move from measuring connectivity to evaluating overall experience?

The temptation is to build bigger and faster. If the infrastructure is scalable, the experience is guaranteed, right? Actually, no. While the solution might involve bigger, faster and better, the actual problem isn't inherently tied to these three favorites. The real issue at hand is that experience, up until now, has been an implicit agreement between IT and the users it serves. It has been guaranteed through adherence to a set of minimums.

The IT department needs to ask: If a new application is being turned up, what kind of server is required? How much storage is necessary? How many 10Gb/s ports are required to connect it to the network?

The answers to these questions are all minimum requirements. The notion that meeting these minimums will somehow guarantee an experience that is never actually specified is optimistic at best. And even if all of these physical requirements are met, does that guarantee a satisfactory end-user experience?

Take the ubiquitous conference call as an example. It is almost comical how difficult it is to get a conference call with any kind of media support to run smoothly. People expect things to break, so they start important calls 20 minutes early to allow enough time to troubleshoot whatever A/V or screen-sharing issues arise. All too often, midway through the meeting the screen becomes pixelated or call quality renders every third word unintelligible. In these moments, the system is certainly connected, traffic is getting through, but the user experience is horrible.

Setting the Bar Too Low

Connectivity isn’t enough to guarantee experience. Avoiding calamity, while still a requirement, is too low a bar for next-generation IT.

Taking control of the end-user experience starts with moving from implicit to explicit definitions of expectations. In the case of a conference call, success might be measured partly in terms of availability, but it is likely to also include things like guaranteed transmission throughput and traffic drop rates. For applications like CRM and ERP, experience may be determined by application response times for certain types of workloads. For new application deployments, experience is likely related to time-to-deploy.
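As a sketch of what "explicit" might look like, expectations like these can be captured as data rather than left as implicit understandings. The metric names and threshold values below are hypothetical illustrations, not prescriptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExperienceTarget:
    """One explicit, measurable expectation for an application."""
    metric: str       # what is measured
    threshold: float  # the agreed target value
    unit: str         # how the threshold is expressed

# Hypothetical targets for the conference-call example
CONFERENCING_TARGETS = [
    ExperienceTarget("availability", 99.9, "percent"),
    ExperienceTarget("throughput", 2.0, "Mb/s minimum"),
    ExperienceTarget("packet_loss", 0.5, "percent maximum"),
]

# Hypothetical target for a CRM or ERP workload
CRM_TARGETS = [
    ExperienceTarget("response_time_p95", 300.0, "ms maximum"),
]
```

Once expectations live in a form like this, they can be reviewed, versioned, and reported against, which is what turns an implicit agreement into an explicit contract.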

The underlying message is that CIOs need to move to an environment where the most critical application experience is defined in explicit terms that are concrete and well-understood by all parties. Clear definitions move experience from an unquantifiable, unqualified observation to an explicit contract between user and IT. Setting expectations in concrete terms helps CIOs eliminate uncertainty while introducing a degree of objectivity that is useful in reporting success metrics.

Moving to a Measurement-Driven Infrastructure

However, such a move does not come easily. A metrics-driven infrastructure requires a degree of instrumentation and data correlation that simply does not exist in most IT shops today. In traditional IT organizations, reporting happens along infrastructure silo lines. Server teams are well aware of capacity and utilization, storage teams understand IOPS, and network teams are well-versed in throughput and availability. But application experience is not limited to a single silo.

If, for example, a critical application experience service-level agreement (SLA) is tied to system response time, it becomes necessary to instrument all aspects of that application and combine the results into a single, reportable metric. This level of orchestration is difficult, and it demands explicit infrastructure, tooling, process, and organizational decisions. If the CIO is ultimately accountable to the CEO for providing a responsive, productive work environment, anything short of this is simply hand-waving and hoping for the best.
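A minimal sketch of the combining step: assume a measurement pipeline has already correlated per-silo timings (server, storage, network) for each transaction. The silo names, sample values, and threshold here are all hypothetical:

```python
def sla_compliance(samples, threshold_ms):
    """Fraction of transactions whose end-to-end response time met the SLA.

    Each sample is a dict of per-silo timings (in ms) for one transaction,
    already correlated by the measurement pipeline; the end-to-end time is
    their sum.
    """
    within = sum(1 for s in samples if sum(s.values()) <= threshold_ms)
    return within / len(samples)

# Hypothetical correlated samples, in milliseconds per silo
samples = [
    {"server": 120, "storage": 40, "network": 30},  # 190 ms total
    {"server": 200, "storage": 90, "network": 50},  # 340 ms total, a breach
    {"server": 100, "storage": 60, "network": 40},  # 200 ms total
]

compliance = sla_compliance(samples, threshold_ms=250)  # 2 of 3 comply
```

The point is not the arithmetic but the shape of the problem: no single silo's dashboard can produce this number, because the metric only exists once measurements cross silo boundaries.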

CIOs who want to move away from lowest-common-denominator practices need to understand the changing landscape around orchestration, analytics, DevOps and software-defined everything. Each of these brings a piece of the solution. How they work together in a real-world setting might not be obvious, but those who solve this problem first will create a meaningful competitive advantage through their corporate IT function.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
