Rackspace's recently launched bare-metal cloud service called OnMetal runs on custom-modified Open Compute servers (pictured). (Photo: Rackspace)

Is the Commodity Data Center Around the Corner?

The data center is changing. We have new methods of cooling, new ways of optimizing the facility, and even green energy from next-generation geothermal technologies. The inside of the data center – what goes into the rack – has been changing as well. New platforms around consolidation, server technology and cloud computing are all reshaping how we process and utilize resources.

The conversation around custom-built servers, networking components and now storage has been heating up. The concept of a commodity data center is no longer locked away in mega-data centers or large organizations. Look at Google as an example: here is an organization that builds its own server platform by the thousands. In fact, Google has developed a motherboard using POWER8 server technology from IBM, and recently showed it off at the IBM Impact 2014 conference in Las Vegas. DCK’s Rich Miller recently outlined how “POWER could represent an alternative to chips from Intel, which is believed to provide the motherboards for Google’s servers.”

But can this translate to the modern organization? Can SMBs and even larger IT shops adopt the concept of a commodity data center? Let’s look at some realities behind what is driving the conversation around a commodity data center, and where there are still some challenges.

  • The emergence of software-defined technologies. Network, storage, compute, management and even the data center itself can now be abstracted by software-defined technologies. The idea behind all of this is to let the virtual layer manage and control physical resources. These technologies are now absolute realities, as a number of vendors allow you to simply point network, storage, compute or other resources at a virtual layer. This can span cloud computing and beyond. There’s really nothing stopping you from buying your own servers, loading them with flash and presenting those resources to a software-defined storage controller. Congratulations! You now have commodity storage with a powerful logical management layer.
  • How hardware is changing. In the past, we were absolutely dependent on the hardware platform. Now, data is so agile that hardware matters mainly for resources and performance. Redundancy and data replication allow organizations to hop between storage shelves and even entire blade chassis environments. The point is that the virtual layer is much more in charge than ever before. More organizations are able to purchase hardware and simply have their hypervisor manage it all. If your information is agile and you have a solid N+[insert your number here] methodology, why does it matter whether your hardware is proprietary, as long as your data stays safe?
  • More automation and orchestration. Open-source tools allow you to scale from your private data center and beyond. These automation tools simply ask you to point resources at them and let the automation policies run their course. Virtual hardware and software profiles allow administrators to re-provision entire stacks of hardware dynamically. This level of control was never available before.
  • How robotics help create commodity platforms. OK, so we’re not quite there yet. But we will be. There has been a lot of debate around the introduction of robotics into the data center. Many argue that future data center models will have far more standardization, allowing robotics to play a bigger role in a commoditized data center. Robotics and automation technologies can range from extremely complex designs to simple data center optimization techniques. For example, a recent TechWeekEurope article discusses how IBM is plotting the temperature patterns in data centers to improve their energy efficiency, using robots built on an iRobot Roomba base.
  • Modular and custom-built data centers. You can basically order your data center to go these days. Modular and custom-built data center designs allow organizations to create the ideal compute model for their business. These can house proprietary systems or commodity platforms. The point is that more organizations are looking at smaller, modular platforms for better density and economics. Traditional data center models still have their place – but the interesting part is the growing diversity in modular and custom-built designs.
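To make the software-defined and re-provisioning ideas above concrete, here is a minimal sketch in Python. It reflects no particular vendor’s API – every class and method name is invented for illustration. The point it demonstrates is the one the bullets make: commodity nodes register with a logical controller, and a virtual hardware profile (a policy, not the hardware) decides how the pooled resources are carved up.

```python
# Hypothetical sketch: a logical controller pooling commodity hardware.
# All names here are invented for illustration, not a real product API.

class CommodityNode:
    """A bare-metal server described only by its raw resources."""
    def __init__(self, name, cpus, ram_gb, flash_gb):
        self.name = name
        self.cpus = cpus
        self.ram_gb = ram_gb
        self.flash_gb = flash_gb

class SoftwareDefinedController:
    """The logical layer: pools nodes and applies provisioning profiles."""
    def __init__(self):
        self.nodes = []

    def register(self, node):
        # Hardware simply joins the pool; the controller is in charge.
        self.nodes.append(node)

    def capacity(self):
        # Aggregate view of the pool, independent of any one box.
        return {
            "cpus": sum(n.cpus for n in self.nodes),
            "ram_gb": sum(n.ram_gb for n in self.nodes),
            "flash_gb": sum(n.flash_gb for n in self.nodes),
        }

    def provision(self, profile):
        # A profile is just a policy: how many instances of this shape
        # can the commodity pool support? The binding resource decides.
        cap = self.capacity()
        return min(cap[k] // need for k, need in profile.items())

controller = SoftwareDefinedController()
controller.register(CommodityNode("rack1-node1", cpus=32, ram_gb=256, flash_gb=4000))
controller.register(CommodityNode("rack1-node2", cpus=32, ram_gb=256, flash_gb=4000))

# A virtual hardware profile for a hypothetical web-tier VM.
web_profile = {"cpus": 4, "ram_gb": 16, "flash_gb": 100}
print(controller.capacity())
print(controller.provision(web_profile))
```

Re-provisioning the same pool for a different workload is just a matter of handing the controller a different profile – which is exactly why the boxes themselves become interchangeable while the intelligence lives in the policy layer.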

New technologies are creating a powerful new data center model. Both logical and physical changes are allowing a lot more diversity within the modern data center. This means organizations can deploy more logical controllers and allow the abstraction of vast hardware resources. But it’s not all simple. There are still some challenges around moving towards a commodity data center.

  • Asset management. What if a drive fails? What if your board goes bad? How do you replace an entire chassis? Who is in charge of that process? A lot of new questions arise when you own the hardware. In some cases, commodity and custom-built systems also mean that your organization is responsible for all maintenance and hardware issue resolution. Unless you have a good control system in place, managing dozens or even hundreds of server platforms might not make a lot of sense.
  • Challenges around open-source technologies. Are you using an open-source hypervisor? Maybe an open-source cloud management tool? Are you ready to create your own scripts and policies? Unless you have a solid partner to work with or internal resources, managing and configuring open-source technologies isn’t always easy. Which brings us to the next point…
  • Lack of human resources. Commodity data centers require a new level of management and control. You have to have administrators ready to help support a much more virtualized environment. Policies, control mechanisms, and physical resources are all managed from a logical layer. How ready is your staff to take on that kind of challenge?
  • When technology capabilities surpass the business. You might be ready to adopt a commodity data center, but is your business? There needs to be complete alignment between the organization and IT entities within a business model. How are your users accessing applications? How are they receiving data? It’s critical to understand how a technological leap to a commodity data center can impact your business.

Not everyone can build their own servers, let alone their own data center platforms. Commodity systems are gaining traction, but slowly. Still, as the data center continues to become the epicenter of all modern technologies, organizations will look for ways to optimize the delivery of content and resources. In some cases, you’ll see commodity systems. More likely, for organizations outside of Google, Facebook and Amazon, you’ll begin to see a new trend emerge: hybrid commodity data centers will become a lot more popular as pieces of the architecture become custom-built. The amazing piece here will be the virtual services capable of interconnecting both commodity and proprietary systems. Ultimately, this will mean more options for administrators, the data center and of course – your organization.

About the Author

Bill Kleyman is a veteran, enthusiastic technologist with experience in data center design, management and deployment. His architecture work includes virtualization and cloud deployments as well as business network design and implementation. Currently, Bill works as the National Director of Strategy and Innovation at MTM Technologies, a Stamford, CT based consulting firm.
