Integration of Data Center Services and Systems



IT racks bathed in blue light (Photo by Akos Batorfi via Flickr)

Jerry Gentry is Vice President, IT Program Management at Nemertes Research

I was looking at a summary of updates from a technology review service the other day, and the top five articles all covered big IT companies buying smaller IT companies. That by itself is no big deal; it happens all the time. What struck me was who was buying whom: the big IT companies were buying smaller firms that had specialized data center equipment or services.

The trend started a few years ago when the big data center players like HP, Cisco, IBM and others began the integration march.  They brought equipment to market that integrated blade processor technology with the network chassis, giving data center managers a more consistent, higher-capacity alternative.  This first foray into integration showed great promise, but it took a real commitment on the part of the customer to endure the pain of a refresh in a data center that is a production facility.  New sites were probably the best candidates for early adoption.

The next wave of data center integration is now starting.  More and more services are being integrated into a single service offering. The question is no longer whether to use the integrated switch/processor components, but how much integration is acceptable.  There are implications to this decision, and only your specific needs and requirements can determine how they play out.

First, you can’t make an isolated decision.  The network infrastructure and the processor infrastructure are now tightly coupled. As a data center manager, you need to be either conversant in or supportive of the network requirements.  Like any organic structure, the network is an amalgam of functions that all balance against each other.  Those balanced services include, but are not limited to:

  • Port count
  • Port speeds (10/100/Gig/10Gig, half and full duplex)
  • LAN and VLAN configurations
  • Primary, secondary, backup and management networks
  • Network hierarchy (access, distribution and core)
  • Network separation (production versus development)
  • Network security
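To make the checklist above concrete, here is a minimal sketch of how those balanced network attributes might be captured for a rack. All field names and values are illustrative assumptions, not part of the article.

```python
# Illustrative sketch only: field names and example values are assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class NetworkProfile:
    """The balanced network attributes a data center manager must track."""
    port_count: int
    port_speed_mbps: int                 # 10, 100, 1000, or 10000
    full_duplex: bool
    vlans: List[int] = field(default_factory=list)
    tier: str = "access"                 # access, distribution, or core
    environment: str = "production"      # production vs. development
    networks: List[str] = field(default_factory=lambda: ["primary"])


# Example: a 48-port 10-Gig access-layer rack with separate
# primary, backup, and management networks.
rack = NetworkProfile(
    port_count=48,
    port_speed_mbps=10_000,
    full_duplex=True,
    vlans=[10, 20],
    networks=["primary", "backup", "management"],
)
print(rack.tier, rack.port_count)  # access 48
```

The point is not the data structure itself but that a change to any one field ripples into the others, which is why the decision cannot be made in isolation.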

Second, you have to know what is driving your costs.  In the days of discrete infrastructure, you could easily calculate the cost of a server, its storage, and the associated network. With increasing virtualization, the cost model has to shift to reflect dynamic service delivery. For a short time it will be possible to keep models that are effectively static – a flat cost per month per server/network component.  That won’t last forever.
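The shift from a static to a dynamic cost model can be sketched in a few lines. The rates and function names below are hypothetical, purely to show the shape of each model.

```python
# Hypothetical cost-model sketch: all rates are illustrative assumptions,
# not figures from the article or from Nemertes Research.

def static_monthly_cost(servers: int, flat_rate: float = 500.0) -> float:
    """Old model: a flat cost per server/network component per month."""
    return servers * flat_rate


def dynamic_monthly_cost(cpu_hours: float, gb_stored: float,
                         gb_transferred: float,
                         cpu_rate: float = 0.05,
                         storage_rate: float = 0.10,
                         network_rate: float = 0.02) -> float:
    """Virtualized model: cost follows consumption of each service element."""
    return (cpu_hours * cpu_rate
            + gb_stored * storage_rate
            + gb_transferred * network_rate)


# A lightly used virtual server costs far less under the dynamic model
# than under the flat per-server charge.
print(static_monthly_cost(1))              # 500.0
print(dynamic_monthly_cost(100, 50, 200))  # 14.0
```

Knowing the consumption of each service element is exactly the homework that prepares you for the pay-on-demand pricing discussed next.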

Third, the world of cloud is going to change how people feel they should pay for services from your data center. Whether your end users subscribe to cloud services or not, there will be publicity around pay-on-demand pricing for services.  It will start with hardware and access but soon move to the application level.

You don’t need to jump there right now, but starting to understand the true cost by service element of what you currently provide will give you a stepping stone for migrating to that method when the demand is there.  We’ll talk more about those service elements in our next post.

To get more useful data center management strategies and insight from Nemertes Research, download the Q2 Data Center Knowledge Guide to Enterprise Data Center Strategies – Volume 2.

About the Author

Jerry Gentry is a research analyst for Nemertes Research.
