The Trend for IT: Big Computing Version 2

There is a new generation of IT, writes Mark Harris of Nlyte Software. And in it, centralized and heavily managed resources are becoming king again.

Mark Harris is the vice president of data center strategy at Nlyte Software, with more than 30 years of experience in product and channel marketing, sales, and corporate strategy.

Information Technology has been with us for 60 years! It's hard to believe, but the first commercial mainframe was deployed in the early 1950s, and with that introduction the world was forever changed. Information could freely be captured, massaged, and reported. Analysis of information happened in seconds rather than weeks or months.

Information Technology became a new industry full of pioneering innovation, with the common goal of facilitating the management of information to derive more value from it.

The first generation of IT

Keeping in mind that a generation is commonly defined as a span of roughly 27 years, the entire first generation of Information Technology (IT) is best characterized as centralized problem solving. Call it Big Computing Version 1.

Large centralized computing was done in elaborate facilities, dominated by IBM. These large computing centers were expensive and were therefore tasked with solving large problems. Business users punched control cards and later sat at the periphery of this massive computing capability, gazing at ASCII CRT screens and taking arm's-reach sips of that centralized computing power. The typical IT new-service delivery project spanned a year or more.

Distributed computing on a global scale

In the mid-1980s, distributed systems arrived (due primarily to the invention of Ethernet and the inexpensive CPU), and the IT industry almost instantly took a 180-degree turn, moving computing close to the user.

Business problem solving could now be done on $2,500 x86 machines that sat on the desktop. Every user got one of these devices, and everything in IT was networked, which allowed corporate-wide information and resources to be accessed as if they were local to each user.

With the commercialization of the Internet in the mid-1990s, this network-enabled, distributed model of corporate computing was extended to include access to information available elsewhere.

Centralized computing meets distributed users

While the current model of computing is still dominated by this Internet-enabled distributed processing model, the whole world of IT is going through foundational changes driven by public clouds, tablets and handheld devices, virtualized desktop initiatives, and social media.

We are going back to a world where heavy processing once again happens in hyper-scale data centers. We are returning to the model of big centralized processing, with fairly thin user viewports that require little if any maintenance or support. Users gain access to enormous processing power and diverse information that resides in big processing centers. Remember that this big processing also relies on big networking and big storage. So, while the term “Big Data” has already become part of our common vernacular, it focuses only on the storage and access aspects of a bigger computing plan.

Coming of age, big computing

Perhaps thinking more broadly than just the data itself, we should start referring to today’s new generation as the era of “Big Computing Version 2,” a term that describes the huge hyper-scale data centers housing all of this enormous, centralized back-end processing.

Facebook, Apple, Google, and Amazon all operate examples of these hyper-scale centers, which power the back end of everything we do today.

Big Computing Version 2 is much more than just Big Data. It also includes big networking, big storage, big processing and big management.

The key to success: big management

Big management is perhaps the most critical component to include in the strategic plan for these hyper-scale processing centers. The core economics of these data centers are based upon the ability to understand and optimize costs.

At the transaction level, the management of the data center itself drives the cost of processing those transactions. Big management is about managing those transaction costs through a wide range of mechanisms, physical and logical.

Big management sets the stage for data centers that can closely align supply and demand. Keeping in mind that the demand for processing changes every second, it’s easy to see how continuously optimizing the status of servers can dramatically affect the bottom line. From a business standpoint, the economics associated with Big Computing Version 2 will be defined by big management.
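To make that supply-and-demand point concrete, here is a minimal sketch in Python using purely hypothetical numbers (the server cost, per-server capacity, and hourly demand profile are assumptions for illustration, not figures from any real facility). It compares a static fleet sized for peak demand against a fleet that is continuously right-sized to each hour's load, and reports the resulting cost per million transactions.

```python
# Hypothetical illustration: how closely matching server supply to demand
# changes the cost of processing transactions in a large data center.

HOURLY_COST_PER_SERVER = 0.45          # assumed fully loaded cost per server-hour
TRANSACTIONS_PER_SERVER_HOUR = 90_000  # assumed capacity of one server per hour

# Assumed demand profile for one day: quiet overnight, busy business hours.
hourly_demand = [2_000_000] * 8 + [9_000_000] * 10 + [4_000_000] * 6

def servers_needed(txns: int) -> int:
    """Servers required to serve a given hourly transaction volume."""
    return -(-txns // TRANSACTIONS_PER_SERVER_HOUR)  # ceiling division

# Strategy 1: provision a static fleet for peak demand and run it all day.
peak_fleet = servers_needed(max(hourly_demand))
static_cost = peak_fleet * HOURLY_COST_PER_SERVER * len(hourly_demand)

# Strategy 2: continuously right-size the fleet to each hour's demand.
managed_cost = sum(
    servers_needed(txns) * HOURLY_COST_PER_SERVER for txns in hourly_demand
)

total_txns = sum(hourly_demand)
print(f"Static fleet: ${static_cost:,.2f}/day, "
      f"${static_cost / total_txns * 1_000_000:.2f} per million transactions")
print(f"Right-sized:  ${managed_cost:,.2f}/day, "
      f"${managed_cost / total_txns * 1_000_000:.2f} per million transactions")
```

Even with these invented numbers, the gap between the two strategies shows why continuously optimizing server state sits at the heart of big management economics.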

Value-oriented innovation

With Big Computing Version 2, there simply is no limit to the amount of processing that can be brought together to handle any type of problem. Users can work with all of these resources in a highly interactive fashion. They can begin to march down a path toward solving business challenges long before they understand the exact steps required to get there, or even know precisely where they will arrive. Through creativity and imagination, they can try various approaches and scan vast collective knowledge sets, attempting to solve their problems in real time.

The main difference from the first generation of Big Computing (25 years ago) is the need to think at the service-delivery and transaction level. Historically, the total cost of traditional IT was sunk into the overall company budget and then apportioned based on simple and often arbitrary units of measure (such as employee counts). Every group paid its share of the total cost of IT, regardless of whether it used the services or not. Today, organizations are looking to tie their IT costs directly to their actual usage. Big management is a critical part of that IT costing requirement.
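As a simple illustration of that shift, the short Python sketch below contrasts headcount-based apportioning with usage-based chargeback. The departments, headcounts, usage figures, and total IT cost are all invented for the example; the point is only how differently the same bill lands under the two schemes.

```python
# Hypothetical illustration: allocating a shared IT bill by headcount vs. by usage.

TOTAL_IT_COST = 1_200_000  # assumed monthly cost of the shared IT platform

# Assumed per-department headcount and measured usage (millions of transactions).
departments = {
    "Engineering": {"headcount": 120, "usage": 640},
    "Marketing":   {"headcount": 60,  "usage": 45},
    "Finance":     {"headcount": 40,  "usage": 220},
}

total_headcount = sum(d["headcount"] for d in departments.values())
total_usage = sum(d["usage"] for d in departments.values())

print(f"{'Department':<12} {'By headcount':>14} {'By usage':>12}")
for name, d in departments.items():
    by_headcount = TOTAL_IT_COST * d["headcount"] / total_headcount
    by_usage = TOTAL_IT_COST * d["usage"] / total_usage
    print(f"{name:<12} {by_headcount:>14,.0f} {by_usage:>12,.0f}")
```

Usage-based chargeback of this kind is only possible when big management can actually measure consumption at the service and transaction level.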

Call it what you like: big computing V2 is here!

Regardless of what this new generation of IT is called (let me suggest Big Computing Version 2), centralized and heavily managed resources are becoming king again. The mainframes have been replaced or augmented by dense server clusters and farms, applications have been decomposed and rebuilt on resilient, scalable platforms, and the ASCII screens have been replaced by thin tablets and handheld devices.

Big Computing Version 2 sets the stage for everything we do at work or at home. It enables the world’s knowledge to be gathered, centralized, and accessed, and it will be standard fare for years to come.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
