Consolidation: Shrinking Our Way to Data Center 3.0


Antonio Piraino is Chief Technology Officer at ScienceLogic. Previously, he was vice president and research director at Tier1 Research, a division of the 451 Group, where he focused on managed services and cloud computing. 


It’s easy to get lost in the myriad vendor framings of the next evolution of data center technology, each introduced to the market under the banner of “Data Center 3.0.” The truth is, Data Center 3.0 is not being defined by a network manufacturer or by second-order effects of the hypervisor. It is being defined by a combination of data center architectures, IT technologies, operations practices and management techniques, all shifting together to make for smarter, more efficient data center design and operations.

Alongside the paradigm shift taking place within enterprise data center design and operations is the impact of cloud computing on the modern data center. Many facilities staff either dismiss the cloud as a trend that still requires the very same data centers to run, and is therefore inconsequential to their daily lives, or fear the hand-off of workloads from their domain to a cloud provider. The reality lies somewhere between those two reactions, and it is that middle ground that is helping to drive the Data Center 3.0 concept.

Therein lies the opportunity to learn from the practices of cloud data center operators – not because they drive better data center design, sustainability programs or operations than your data center does, but because they host a single application (e.g. eBay) or a single platform (e.g. Facebook, Salesforce.com), or focus intently on energy efficiency (e.g. Apple, Google). Although it is well documented that each of these operators runs very large facilities that gain from economies of scale, they also have smaller deployments within other colocation providers’ facilities. There is something to be gained from understanding when ownership of a data center or infrastructure is neither an efficient nor a smart decision from a financial viability perspective.

Why Are Cloud Providers Getting There First?

Cloud, hosting and colocation data center operators must meet stakeholder demands for high margins on every Web service sold. As a result, there is a lot to be learned from the way they drive out inefficiency. Under that pressure to do more with less, a trend toward “just-in-time” data center deployments has emerged.

Rather than the traditional 18-month cycle to build a large data center facility that then sits underutilized for years until more space is needed, a rapidly growing segment of the industry is finding modular and containerized solutions invaluable. Not to be confused with the Dell black boxes of the past, which were intended to help sell a greater volume of pre-configured servers, these modular containers emulate their legacy brethren in everything from fire suppression to thermal containment to rack power. In fact, many operators have shown dramatic reductions in facility integration engineering, and even greater reductions in energy usage, from modular data centers that can be deployed and augmented within weeks of an order.

As with everything in the IT industry, there is no magic solution for every situation. However, the trend toward shrinking, just-in-time data center deployments is growing, and it is becoming a significant option in the arsenal of data center operators – so much so that I include it in the conceptual catalog of Data Center 3.0.

How Is The IT Stack Promoting the Shift?

Hand-in-hand with the reduction in data center facilities is the convergence of infrastructure within the data center. This is a departure from the traditional standalone infrastructure that a Dell or HP would have sold you on the compute side; that a Cisco or Juniper would have sold you on the networking side; that an EMC or NetApp would have sold you on the storage side; and that a VMware or Citrix XenSource would have sold you at the hypervisor level. The industry can anticipate greater convergence between these infrastructure components and applications in the near future, too. The rationale for converged infrastructure is manifold, and includes the convenience of a pre-configured set of technologies for easier deployment and more expedient management.
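
To make the idea concrete, here is a minimal sketch in Python of treating a converged block as a single pre-validated unit rather than four separately integrated vendor products. The ConvergedBlock type, its fields and its storage-per-node check are illustrative assumptions, not any vendor’s actual packaging.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConvergedBlock:
    """One pre-configured unit of compute, network, storage and hypervisor."""
    compute_nodes: int   # e.g. blade servers in the block
    network_fabric: str  # e.g. "10GbE leaf-spine"
    storage_tb: int      # usable capacity shipped with the block
    hypervisor: str      # e.g. "vSphere" or "XenServer"

    def validate(self) -> bool:
        # A converged unit ships with known-good ratios; a simple
        # storage-per-node floor stands in for vendor validation here.
        return self.storage_tb >= self.compute_nodes * 2

block = ConvergedBlock(compute_nodes=16, network_fabric="10GbE leaf-spine",
                       storage_tb=64, hypervisor="vSphere")
print(block.validate())  # True: deploy the whole stack as one unit
```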

The evolution of converged infrastructure has seen a natural progression, too, in the monitoring and management technologies that provide visibility into, and control over, these architectures. Instead of the age-old swivel-chair tactics deployed in NOCs, tools now provide a single-pane-of-glass view into the IT infrastructure. In some cases, operational business intelligence tools are leveraged to make smart decisions about the appropriate IT resources on a per-workload, per-time-of-day basis.
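
As an illustration of the kind of decision such a tool might automate, here is a minimal sketch of a per-workload, per-time-of-day resource policy. The tier names, priority scale and off-peak window are hypothetical assumptions for illustration, not a reference to any specific product.

```python
from datetime import datetime

def resource_tier(workload: str, priority: int, now: datetime) -> str:
    """Pick a resource tier for a workload based on priority and time of day."""
    off_peak = now.hour < 7 or now.hour >= 22
    if priority >= 8:
        return "dedicated"   # latency-sensitive: always gets full resources
    if workload == "batch" and off_peak:
        return "burst"       # soak up otherwise idle capacity overnight
    return "shared"          # default consolidated pool

# A low-priority batch job at 11 p.m. lands in the overnight burst pool.
print(resource_tier("batch", priority=3, now=datetime(2013, 5, 1, 23, 0)))
# -> "burst"
```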

Similarly, greater automation is displacing manual workflows in modern cloud data centers, meaning that everything from the runbook through remediation can now be handled in drastically less time and at lower cost. And what would automation be without smart, business-driven policies that reduce wasted energy on workloads that don’t need it at a given moment of the day, and shift capacity to those that do? All of these technologies, and the associated shift in operational culture, are contributing factors to the modern data center era, otherwise known as Data Center 3.0.
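
A minimal sketch of what that looks like in practice, assuming a simple alert-to-action mapping and a utilization floor below which hosts become power-down candidates; the alert names, host names and thresholds are all hypothetical.

```python
# Runbook steps expressed as code: each known alert maps to a remediation.
REMEDIATIONS = {
    "service_down": lambda host: print(f"restarting service on {host}"),
    "disk_full":    lambda host: print(f"rotating logs on {host}"),
}

def remediate(alert: str, host: str) -> None:
    """Fire the runbook step automatically; escalate anything unrecognized."""
    action = REMEDIATIONS.get(alert)
    if action:
        action(host)  # no swivel chair: the step runs without an operator
    else:
        print(f"escalating unknown alert {alert!r} on {host}")

def idle_hosts(utilization: dict, floor: float = 0.10) -> list:
    """Energy policy: hosts idle below the floor can be powered down."""
    return [h for h, u in utilization.items() if u < floor]

remediate("service_down", "web-04")
print(idle_hosts({"web-01": 0.62, "web-07": 0.03}))  # -> ['web-07']
```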
