The Fruits of Innovation: Top 10 IT Trends in 2014

Mark Harris is the vice president of marketing and data center strategy at Nlyte Software, with more than 30 years of experience in product and channel marketing, sales, and corporate strategy. Nlyte Software is the independent provider of Data Center Infrastructure Management (DCIM) solutions.

MARK HARRIS

Nlyte Software

The IT industry and its data centers are going through change today at a breakneck pace. Changes are underway to the very fundamentals of how we create IT, how we leverage IT, and how we innovate in IT. Information Technology has always been about making changes that stretch the limits of creativity, but when so many core components change at the same time, it becomes both exciting and challenging even for the most astute of us IT professionals.

The changes we’re due to see in 2014 start with the way people think. A good bit of the change underway in IT is about the maturity of its business leaders and the business planning skills needed to manage all of that change. In the end, these leaders are now tasked to accurately manage, predict, execute and justify. Hence, the CIO’s role will evolve. Previously, CIOs were mostly technologists measured almost exclusively on availability and uptime. The CIO’s job was all about crafting a level of IT services that the company could count on, and the budgeting process needed to do so was mostly a formality.

Best Qualities in a CIO

The most effective CIOs in 2014 will be business managers who understand the wealth of technology options now available, the costs associated with each, and the business value of each of the various services they are chartered to deliver. He or she will map out a plan that delivers just the right amount of service within the agreed business plan. Email, for instance, may have an entirely different value to a company than its online store, so the means to deliver these diverse services will need to differ. It is the 2014 CIO’s job to empower their organizations to deliver just the right services at just the right cost.

For technologists, 2014 will be a banner year for change at a nearly unprecedented rate. If we look back at the IT industry’s history overall, it really started about 60 years ago with IBM’s first commercial release of the mainframe, became a distributed computing world in the late 1970s, transitioned to an Internet-connected world in the mid-1990s, and then exploded into the current generation of dynamic, abstracted computing beginning in the mid-2000s. This new approach to computing puts a tremendous emphasis on back-end data center services rather than the capability of the end-user’s device. After a few false starts (like the netbook), the mature web-based, handheld, mobile and VDI revolution has become a cornerstone of computing, and it has become a race to put all of the actual computing back into the data center and to do so in a modular fashion. That said, most existing data centers pre-date this dynamic period, and hence their supporting infrastructure is mostly ill-prepared to handle this dramatic shift toward cost-oriented data center services.

What Lies Ahead?

In 2014, we see much of the fruit of this race for innovation. These trends for 2014 are not niche possibilities or proofs-of-concept; they are the technologies we will see move into rapid production-level adoption.

1. Big Data is finding its footing as the initial hype has settled and its commercial applicability has been demonstrated across all major industries. Big Data has become one of the biggest topics for the enterprise. The premise is simple: put all of the historical transactional and business knowledge in one place, and then use various analytic means to extract actionable inferences. The keys to Big Data’s promise are how to put ALL of a company’s data logically in one place, and then, when this massive data set has grown to over 2TB (the very definition of Big Data), how to retrieve meaningful and actionable knowledge from it interactively. This is now possible, and in 2014 many end users will move their Big Data experimental pilots into full production. The Big Data vendors will also keep progressing, with support for even larger memory-based data structures, real-time streamed data and more advanced data mining techniques built on the maturity of their own unique technology. Expect to see many of the world’s largest organizations consolidate their many discrete data stores into one logical Big Data structure and then look for innovative ways to turn this wealth of knowledge into impressive new business drivers.
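To make the “consolidate, then query interactively” idea concrete, here is a minimal sketch using Apache Spark’s Python API. Spark is only one of several engines in this space, and the storage paths, table names and column names below are hypothetical placeholders rather than anything from a real deployment:

```python
# Minimal sketch: load several historical stores into one logical dataset
# and run an ad hoc aggregation over it. Assumes a running Spark cluster;
# the paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("consolidated-analytics").getOrCreate()

# Archived transactional data and CRM exports, landed in one logical warehouse.
orders = spark.read.parquet("hdfs:///warehouse/orders/")        # hypothetical path
customers = spark.read.parquet("hdfs:///warehouse/customers/")  # hypothetical path

# Join the two stores and ask an interactive question:
# which customer segments drove the most revenue last year?
revenue_by_segment = (
    orders.join(customers, on="customer_id")
          .where(F.col("order_year") == 2013)
          .groupBy("segment")
          .agg(F.sum("order_total").alias("revenue"))
          .orderBy(F.desc("revenue"))
)

revenue_by_segment.show()
```

The point of the sketch is the workflow, not the tool: once the data lives in one logical place, questions like this become a few lines of interactive query rather than a cross-system reporting project.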

2. The Software Defined Networking (SDN) market will remain somewhat fragmented, but the adoption rate for SDN will grow. In 2014 we will see many networking vendors attempting to differentiate themselves in the SDN space. It is important to remember that Software Defined Networking has two key components: the intelligent controller and the physical switches. That said, there is more than one technical approach to these SDN systems, so the end-user community will be faced with making some choices. The largest SDN vendors will each refine and articulate their specific approach to SDN and attempt to help their prospects understand why the combination of their switching and controllers has the most value. The common premise of these vendors will be that SDN allows better value through economies of scale and choice in a multi-vendor, interoperable world. Many SDN vendors will focus on differentiation by demonstrating overall performance and/or capacity. Appliance-level switch and controller comparisons and performance benchmarks will emerge (much like the gigabit switching revolution 10 years ago), and interoperability and scale will be a hot topic. In 2014 the rate at which SDN is adopted by the end-user community will increase significantly, as all of the mainstream players have now announced and delivered compelling components in the SDN space.
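To illustrate the controller/switch split, here is a minimal sketch of a controller-side application written with the open-source Ryu OpenFlow framework. Ryu is just one possible controller among many and is not endorsed by the article; the sketch simply installs a “table-miss” rule on every switch that connects, so traffic with no matching flow entry is sent up to the controller for a decision:

```python
# Minimal sketch of an SDN controller app (Ryu, OpenFlow 1.3): when a switch
# connects, install a lowest-priority rule that forwards unmatched packets
# to the controller. Policy lives in the controller; switches just forward.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissInstaller(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything; punt unmatched packets to the controller.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
```

However vendors differentiate their controllers and switches, this is the basic shape of the model: forwarding hardware stays simple, and a central piece of software decides how flows are handled.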

3. Asset lifecycle management will be adopted to control costs. The traditional goal of IT was to assure that applications worked as needed and remained available to their users. The cost of providing this was viewed mostly as a purchasing department’s challenge: shave a few percentage points off the purchase of servers or switches and everyone considered the job well done. As a result, it has been commonplace to leave equipment in service for years and years and ignore the economic impact of doing so, with an “if it isn’t broken, don’t fix it” mentality. In 2014 we will see an increasing rate of adoption of innovative new tools that manage these physical devices as business assets, with standard accounting processes and costs being addressed. Thanks in part to the abstractions now available through software-defined approaches, physical devices are no longer tied so tightly to their installed applications, so the complexity of making hardware changes (including those demanded by technology refresh cycles) has been greatly reduced. It is becoming very apparent that servers or switches that exceed their depreciation and warranty schedules can easily be replaced to reduce operating costs, so the adoption of new tools to manage all of this change will become a higher priority. Data Center Infrastructure Management (DCIM) tools will take their place as the strategic business management solution for managing all of the costs associated with this change.
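At its core, the asset-lifecycle check is simple date arithmetic. The sketch below flags hardware that has outlived both its depreciation schedule and its warranty; the asset records and the schedule lengths are hypothetical examples, not a reference to any particular DCIM product’s data model:

```python
# Minimal sketch: flag assets that have exceeded both their depreciation
# schedule and their warranty, making them candidates for replacement.
# The records and schedule lengths below are hypothetical examples.
from dataclasses import dataclass
from datetime import date


@dataclass
class Asset:
    name: str
    installed: date
    depreciation_years: int   # e.g., a 3-year straight-line schedule
    warranty_years: int       # e.g., a 3- or 5-year vendor warranty


def years_in_service(asset: Asset, today: date) -> float:
    return (today - asset.installed).days / 365.25


def replacement_candidates(assets: list[Asset], today: date) -> list[Asset]:
    """Assets past both depreciation and warranty keep incurring operating
    cost (maintenance, power, space) with no book value left."""
    return [
        a for a in assets
        if years_in_service(a, today) > max(a.depreciation_years, a.warranty_years)
    ]


inventory = [
    Asset("web-srv-014", date(2009, 6, 1), 3, 3),
    Asset("db-srv-002", date(2012, 11, 15), 3, 5),
]
for a in replacement_candidates(inventory, date(2014, 1, 1)):
    print(f"{a.name}: {years_in_service(a, date(2014, 1, 1)):.1f} years in service")
```

A DCIM tool does this across tens of thousands of devices and ties the answer back to contracts, power draw and floor space, but the underlying question per asset is the one shown here.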

4. The server is being redefined. For the past 20 years, a server was generally defined as an x86-compatible chip housed in a 19-inch rack-mounted package. Every server in the data center fit this definition. Sure, there were many differences in how fast each model was, or in memory, disk or I/O capacity, but everything was essentially the same server architecture at the core. In 2014, we see dramatic alternatives. The major server vendors and a handful of startups will offer both low-power x86 designs and even lower-power ARM and Power chipsets. The cost of electricity is a major component of operating costs, so innovative ways of processing data on a lower power budget just make sense. It turns out many applications are perfectly suited to these new low-power CPU options, and in 2014 we will see these applications touted as wild successes. Surprisingly, some of these new low-power CPU designs rival the IOPS performance of their x86 cousins. In addition to the CPU changes, we will see form factors shipping in volume that are intended for dense applications. Traditionally we had just two form-factor choices: proprietary chassis or the standard 19-inch rack mount. In 2014 we will see an onslaught of half-width servers, and we will also see the first versions of an alternative server packaging scheme referred to as Open Compute. Open Compute has been pioneered by hyperscale operators such as Facebook and is a standardized “open” form factor; think of it as the open-source version of server hardware. In 2014 we will see initial adoption of Open Compute by a much wider commercial audience.
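The electricity argument is easy to put in numbers. The back-of-the-envelope sketch below compares the annual power bill of a conventional x86 server with a low-power alternative; the wattages, PUE and utility rate are illustrative assumptions, not measurements from any vendor:

```python
# Back-of-the-envelope sketch: annual electricity cost per server.
# All figures below (wattage, PUE, $/kWh) are illustrative assumptions.
HOURS_PER_YEAR = 24 * 365
PUE = 1.8              # facility overhead multiplier (cooling, power distribution)
RATE_PER_KWH = 0.10    # assumed utility rate in dollars


def annual_cost(avg_watts: float) -> float:
    kwh = avg_watts / 1000 * HOURS_PER_YEAR * PUE
    return kwh * RATE_PER_KWH


conventional_x86 = 300   # assumed average draw in watts
low_power_node = 100     # assumed average draw for an ARM/low-power x86 design

print(f"Conventional x86: ${annual_cost(conventional_x86):,.0f} per year")
print(f"Low-power node:   ${annual_cost(low_power_node):,.0f} per year")
print(f"Savings per node: ${annual_cost(conventional_x86) - annual_cost(low_power_node):,.0f}")
```

Under these assumptions the gap is a few hundred dollars per node per year; multiplied across thousands of servers, that is why the low-power designs are getting serious attention for the workloads that fit them.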

5. The Public Cloud grows up. Once viewed as a platform for only a limited set of specialized, non-critical applications, the Public Cloud has struggled to take a primary spot at ALL of the IT tables. In fact, the Public Cloud industry’s report card for the last few years would reveal a mixed bag of successes and failures, which has stunted its growth. At critical points in the adoption curve, high-profile failures have limited its strategic adoption. Leveraging Public Cloud technology also requires some fundamental re-education and new economic modeling, which is all new to most IT organizations. In 2014, we will see the largest Public Cloud players proactively discuss these failures and provide detailed insight into how these issues will be addressed so they do not happen in the future. In 2014 the mainstream virtualization vendors will extend their capacity and migration tools to span from in-house to Public Cloud-based instances. As computing capacity is used up internally, it will become commonplace to simply expand to the Public Cloud for demand peaks. In addition, the open-source world of public cloud computing (dominated by OpenStack) will see its newest release (“Havana”) open some eyes to what is possible in a multi-vendor cloud world. OpenStack’s Havana release supports OpenFlow controllers (one of the main multi-vendor approaches to SDN), Open vSwitch and VMware’s NSX. Havana also delivers support for orchestration, including direct support for Amazon’s CloudFormation templates. Finally, significant user-level accounting will appear, making OpenStack a business-grade option in a multi-tenant world. OpenStack and the tools built for it will likely prove to be one of the biggest enablers in creating a multi-vendor Public Cloud.
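The “expand to the Public Cloud for demand peaks” pattern (often called cloud bursting) reduces to a simple control loop. The sketch below is only an outline of that loop: the threshold, the utilization probe and the provisioning calls are hypothetical stand-ins, not the API of any specific virtualization or cloud vendor:

```python
# Minimal cloud-bursting sketch. The utilization probe and provisioning calls
# are hypothetical stand-ins for whatever virtualization-management and
# public-cloud APIs an organization actually uses.
BURST_THRESHOLD = 0.85   # assumed: burst once internal utilization passes 85%


def internal_utilization() -> float:
    """Stand-in for a query against the in-house virtualization manager."""
    return 0.91          # hypothetical sample reading


def provision_public_cloud_instances(count: int) -> None:
    """Stand-in for a call to the public cloud provider's provisioning API."""
    print(f"Requesting {count} burst instances from the public cloud")


def decommission_idle_cloud_instances() -> None:
    """Stand-in for releasing burst capacity once the demand peak passes."""
    print("Releasing idle public-cloud burst instances")


def rebalance_capacity() -> None:
    if internal_utilization() > BURST_THRESHOLD:
        # In-house capacity is nearly exhausted: overflow to the public cloud.
        provision_public_cloud_instances(count=4)   # hypothetical batch size
    else:
        # The peak has passed: stop paying for burst capacity.
        decommission_idle_cloud_instances()


rebalance_capacity()
```

The tooling the virtualization vendors ship in 2014 will wrap this loop in policy, workload migration and billing, but the decision being automated is the one sketched here.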

3 Comments

  1. ed sterbenc

    Great insights and a fun read!

  2. Excellent summary.

  3. "...when this massive data set has grown to over 2TB (the very definition of Big Data)" Hi Mark, I'm pretty sure you meant 2 petabytes. I have two terabytes right here on my desktop computer! Best, Henry