Consolidation: Shrinking Our Way to Data Center 3.0

Like everything in the IT industry, there is no magic solution for all situations. However, the trend toward shrinking, just-in-time data center deployments is growing and becoming a significant option in the arsenal of data center operators, writes Antonio Piraino of ScienceLogic.

Antonio Piraino is Chief Technology Officer at ScienceLogic. Previously, he was vice president and research director at Tier1 Research, a division of the 451 Group, where he focused on managed services and cloud computing. 


It’s easy to get lost in the myriad of visions that vendors are introducing to the market under the banner of "Data Center 3.0." The truth is, Data Center 3.0 is not being defined by any one network manufacturer, nor by the second-order effects of the hypervisor. It is being defined by a combination of data center architectures, IT technologies, operations practices and management techniques, all shifting together to make for smarter, more efficient data center design and operations.

Orthogonal to the paradigm shift taking place within enterprise data center design and operations is the impact of cloud computing on the modern data center. Many facilities staff either dismiss the cloud as a non-impacting trend that still requires the very same data centers to run, and is therefore inconsequential to their daily lives, or fear the hand-off of workloads from their domain to a cloud provider. It is the reality that lies between those two extremes that is helping to drive the Data Center 3.0 concept.

Therein lies the opportunity to learn from the practices of the cloud data center operators – not because they drive better data center design, sustainability programs or operations than your data center, but because they host a single application (e.g. eBay), a single platform (e.g. Facebook, Salesforce.com) or focus on energy efficiency (e.g. Apple, Google). Although it is well documented that each of these vendors operates very large facilities that gain from economies of scale, they also have smaller deployments within other colocation vendors’ facilities. There is something to be gained from understanding when ownership of a data center or its infrastructure is neither an efficient nor a smart decision from a financial viability perspective.

Why Are Cloud Providers Getting There First?

Cloud, hosting and colocation data center operators must meet stakeholder demands for high margins on every Web service sold. As a result, there is a lot to be learned from the way that they drive out inefficiency to meet these demands. In response to the pressure to do more with less, there is a trend toward "just-in-time" data center deployments.

Rather than the traditional 18-month cycle to build a large data center facility that goes underutilized for years until more space is needed, a rapidly growing segment of the industry is finding modular and containerized solutions invaluable. Not to be confused with the Dell black boxes of the past, intended to help sell a greater volume of pre-configured servers, these modular containers emulate their legacy brethren in everything from fire suppression to thermal containment to rack power. In fact, many operators have shown dramatic reductions in facility integration engineering needs, and even greater reductions in energy usage, from modular data centers that can be deployed and augmented within weeks of an order.

Like everything in the IT industry, there is no magic solution for all situations. However, the trend toward shrinking, just-in-time data center deployments is growing and becoming a significant option in the arsenal of data center operators – so much so that I include it in the conceptual catalog of Data Center 3.0.

How Is The IT Stack Promoting the Shift?

Hand-in-hand with the reduction in data center facilities goes the convergence of infrastructure within the data center. This is a departure from the traditional standalone infrastructure that a Dell or HP would have sold you on the compute side; that a Cisco or Juniper would have sold you on the networking side; that an EMC or NetApp would have sold you on the storage side; and that a VMware or Citrix XenSource would have sold you at the hypervisor level. The industry can anticipate greater convergence between these infrastructure components and applications in the near future, too. The rationale behind converged infrastructure is manifold, and includes the convenience of a pre-configured set of technologies for easier deployment and more expedient management.

The evolution of converged infrastructure has seen a natural progression, too, in the monitoring and management technologies that give visibility and control into these architectures. Instead of the age-old swivel-chair tactics deployed in NOCs, tools now provide a single-pane-of-glass view into the IT infrastructure. In some cases, operational business intelligence tools are leveraged to make smart decisions about the appropriate IT resources needed on a per-workload, per-time-of-day basis.
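As a minimal sketch of that per-workload, per-time-of-day decision-making – with entirely hypothetical workload names, thresholds and scaling rules, not any particular vendor's tool – such a policy might look like this:

```python
from dataclasses import dataclass

@dataclass
class WorkloadSample:
    """Observed behavior of one workload (illustrative fields only)."""
    name: str
    avg_cpu_pct: float   # average CPU utilization observed by monitoring
    peak_hour: int       # hour of day (0-23) when demand historically peaks

def recommended_capacity(sample: WorkloadSample, hour: int) -> int:
    """Suggest an instance count for this workload at a given hour,
    scaling up near its observed daily peak (assumed rule of thumb:
    roughly one instance per 25% of average CPU, doubled near the peak)."""
    base = max(1, round(sample.avg_cpu_pct / 25))
    near_peak = abs(hour - sample.peak_hour) <= 1
    return base * 2 if near_peak else base

web = WorkloadSample("web-frontend", avg_cpu_pct=60, peak_hour=14)
print(recommended_capacity(web, hour=14))  # doubled capacity near the peak
print(recommended_capacity(web, hour=3))   # baseline capacity overnight
```

Real operational BI tools feed these decisions from historical monitoring data rather than fixed constants, but the shape – observed metrics in, a per-hour resourcing decision out – is the same.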

Similarly, greater automation is displacing the manual workflows of modern cloud data centers, meaning that everything from the runbook through remediation can now be handled in drastically less time and at lower cost. And what would automation be without smart, business-driven policies aimed at reducing energy wasted on workloads that don’t need it at any given moment of the day, or vice versa? All of these technologies, and the associated shift in operational culture, are contributing factors to the modern data center era, otherwise known as Data Center 3.0.
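A business-driven power policy of that kind can be sketched in a few lines. The utilization thresholds and action names below are assumptions for illustration, not a standard:

```python
def power_action(cpu_pct: float, is_business_hours: bool) -> str:
    """Decide what to do with a host based on its workload utilization.
    Assumed policy: park near-idle hosts overnight to reclaim power;
    consolidate lightly loaded hosts via live migration; otherwise leave alone."""
    if cpu_pct < 5 and not is_business_hours:
        return "suspend"        # near-idle and off-hours: power the host down
    if cpu_pct < 20:
        return "consolidate"    # migrate VMs off, then repurpose or park
    return "keep-running"

print(power_action(2, is_business_hours=False))   # off-hours idle host
print(power_action(50, is_business_hours=True))   # busy host, untouched
```

In a real deployment the remediation step ("suspend", "consolidate") would be a runbook triggered automatically by the monitoring system, which is exactly where the time and cost savings come from.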

What’s Missing Then?

What about the connection between the data center facilities operations teams and the IT smarts being deployed? Enter the missing link in the Data Center 3.0 chain, which is being bridged by the cloud providers. We are beginning to see the acceleration of intelligent tools that help enterprise users make decisions – for example, where within Amazon Web Services (AWS) a workload belongs. These decisions are not static. They are dynamic, based on learning from the performance and pricing of an instance within a given region at different times, and on the ability to automatically orchestrate the migration or spinning up of instances in different AWS availability zones based on these metrics.
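To make the placement decision concrete, here is a sketch that scores regions on observed price and latency and picks the best one. The region names are real AWS regions, but the prices, latencies and the weighting scheme are invented for illustration; a real system would pull these from pricing APIs and its own monitoring history:

```python
# Hypothetical per-region observations (illustrative numbers only).
regions = {
    "us-east-1": {"hourly_price": 0.096, "p95_latency_ms": 120},
    "us-west-2": {"hourly_price": 0.112, "p95_latency_ms": 80},
    "eu-west-1": {"hourly_price": 0.107, "p95_latency_ms": 95},
}

def best_region(metrics: dict, price_weight: float = 0.5) -> str:
    """Pick the region with the lowest weighted, normalized
    combination of price and latency (lower score is better)."""
    max_price = max(m["hourly_price"] for m in metrics.values())
    max_lat = max(m["p95_latency_ms"] for m in metrics.values())
    def score(m):
        return (price_weight * m["hourly_price"] / max_price
                + (1 - price_weight) * m["p95_latency_ms"] / max_lat)
    return min(metrics, key=lambda r: score(metrics[r]))

print(best_region(regions))                     # balances price and latency
print(best_region(regions, price_weight=1.0))   # cheapest region wins
```

Because the inputs change over time, re-running the scoring on fresh metrics is what turns a one-off placement choice into the dynamic, orchestrated migration described above.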

The AWS example is not coincidental. The issue with all of these technologies and operations practices is that cloud providers are driving them. And herein lies the decision for modern data center owners and operators. With Data Center 3.0 technologies and practices abounding among cloud providers, one can choose to compete, to adopt their best practices for one's own data center operations, or to begin migrating workloads to those cloud providers.

Ironically, even smaller cloud and managed hosting providers need to make the same decision. Instead of immediately constructing another large data center in the same way as in the past, they can leverage cost-effective and highly secure cloud services, temporarily or permanently, for specific workloads and services. And if that is not an option, borrowing best practices from those providers is another.

Perhaps the biggest challenge in making this decision is the gap that continues to exist between facilities teams and IT teams. Each team has its own challenges, and there is a mismatch between the lifecycles of facilities upgrades and IT refreshes, and even between the related employees and their associated projects. However, the common point of interest is the need to deliver optimized service and cost efficiency back to the business they represent. And while some data center operations have recognized the need for a hybrid executive who understands both worlds, the onus is on both groups to work together, exploiting each other's technologies, to create further efficiencies in the data center.

IT managers must think more intelligently about data centers and power. Likewise, data center facilities operators need to resist the temptation to throw their hands up in frustration over the lack of explicit power-needs projections from their IT peers. Both groups need to stop thinking about power and data centers in isolation, and instead shift the focus to managing power and IT workloads in sync with each other to produce smarter data center operations. That also requires data center operators and IT managers to act as service brokers between their own in-house resources and, as needed, external cloud resources, using all of the modern tools at their disposal. That is ultimately what Data Center 3.0 really represents.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
