Software-Defined Data Centers: What Lies Ahead?

The term “software-defined,” applied as a modifier to data centers, networks or storage, is growing in popularity. Software-defined solutions, like virtualization before them, can allow for a great deal of flexibility and efficient sharing of resources. The potential is great, but of course, there are risks, too.

The software-defined concept is not complex — simplify the switching, transport, storage and related infrastructure hardware, then move “command and control” up to the application and services layer, according to Art Meierdirk, Senior Director of Business Services, INOC. An industry veteran with more than 35 years of telecommunications and data communications experience, he will moderate a panel titled “Software Defined Data Centers – Next Steps for Storage and Networking” at the Orlando Data Center World in October.
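To make that separation of “command and control” from hardware concrete, here is a toy sketch in Python. It is illustrative only: the class and method names (`Controller`, `Switch`, `install_rule`) are invented for this example and do not correspond to any real SDN protocol or API such as OpenFlow. The idea it shows is simply that policy lives in one central piece of software, while the forwarding elements hold nothing but the rules pushed down to them.

```python
class Switch:
    """A 'dumb' forwarding element: no local policy, just a rule table."""
    def __init__(self, name):
        self.name = name
        self.rules = {}  # destination -> output port

    def forward(self, dst):
        # With no matching rule, fall back to a default port (0 here).
        return self.rules.get(dst, 0)


class Controller:
    """Centralized command and control: policy decisions live here,
    not in the hardware."""
    def __init__(self):
        self.switches = []

    def attach(self, switch):
        self.switches.append(switch)

    def install_rule(self, dst, out_port):
        # Push the same policy decision down to every attached switch.
        for sw in self.switches:
            sw.rules[dst] = out_port


ctrl = Controller()
edge = Switch("edge-1")
ctrl.attach(edge)
ctrl.install_rule("10.0.0.5", out_port=3)
print(edge.forward("10.0.0.5"))  # -> 3, the rule installed centrally
```

The point of the sketch is the division of labor: upgrading or changing policy touches only the controller software, never the individual forwarding elements — which is the flexibility Meierdirk describes.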

Data Center Knowledge asked him a few questions about software-defined data centers (SDDC), including software defined storage (SDS) and software defined networks (SDN).

There are many flavors to the set-up, especially when it comes to who maintains the software-defined system. “The command and control (automated control software) can be maintained by one or more entities such as network, data center or cloud services providers, and applications or service providers, in addition to enterprise businesses,” he noted. “All control some or all aspects of a business solution.”

Software-defined data center advantages and drawbacks

Meierdirk said, “The software-defined data center is an environment in which all infrastructure is virtualized and delivered as a service, and the control of this data center is entirely automated by software.” There are multiple advantages to this approach.

He outlined unprecedented capabilities for business services, such as:

  • More effective use of the IT / data center infrastructure, which reduces costs.
  • Flexibility, with rapid deployment of new services and a shorter time to value.
  • A standards-based architecture that avoids single Original Equipment Manufacturer (OEM) lock-in.
  • A redundant, distributed and diverse architecture for business continuity.
  • More control for businesses over all infrastructure, including interconnectivity.
  • More complete integration of network, facilities and IT infrastructures.

“Yet,” he cautioned, “we are looking at ‘bleeding edge’ opportunities — balancing new opportunities for business against the risk of failed deployments.”

Issues could include:

  • New security risks, which may be overstated but still must be addressed.
  • Proprietary implementations by Original Equipment Manufacturers (OEMs), which can delay standards adoption.
  • Standards and interoperability still in a state of flux, which can paralyze investment.
  • Considerable investment in legacy equipment, raising questions about how to manage a new deployment.
  • The newness and complexity of software-defined solutions and virtualization, which bring a significant learning curve for deployment and support.
  • For some regulated businesses, handing off network control and data storage / processing may not be allowed.

What lies ahead for this trend?

“Perhaps in the future, ‘business-focused virtualization’ could allow the enterprise business to control its IT infrastructure, remote locations, data and computing — as well as its interactions with other service providers and customers — over a virtual solution covering all aspects of its business,” Meierdirk said.

A comprehensive solution such as this requires software control, provided either by the enterprise itself or by another provider (such as a data center), as an overlay on top of carrier transport, with switching and routing controlled by the enterprise. “It is an exciting possibility and could be very fertile ground for an expansion of data center services,” he said.

In the future, Data Center Infrastructure Management (DCIM) offers several opportunities for use of software-control:

  • Capacity monitoring and management.
  • Power — using software to monitor and manage power utilization and balancing, and to schedule optional operations for the most favorable (peak vs. non-peak) energy-cost hours.
  • HVAC — using software to monitor and balance loads with the aim of reducing heating and cooling costs.
  • Hardware — turning resources up or down as needed.
  • Application / performance management and optimization — software that interacts with applications to monitor performance and make network or application selections that improve it.
  • Dynamic least-cost configurations for storage, computing, access and transport — software that compares and selects the best options in a dynamic environment.
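The last item — dynamic least-cost selection — can be sketched in a few lines. This is a hypothetical illustration, not any real DCIM product’s logic: the option list, its field names (`capacity`, `cost_per_hour`) and the prices are all invented for the example. The software’s job is simply to compare current quotes for equivalent resources and pick the cheapest one that still meets the requirement.

```python
def cheapest_option(options, min_capacity):
    """Return the lowest-cost option that meets the capacity requirement."""
    viable = [o for o in options if o["capacity"] >= min_capacity]
    if not viable:
        raise ValueError("no option meets the requirement")
    return min(viable, key=lambda o: o["cost_per_hour"])


# Invented price quotes for three interchangeable storage options.
storage_quotes = [
    {"name": "on-prem-array", "capacity": 500, "cost_per_hour": 0.12},
    {"name": "cloud-tier-a",  "capacity": 800, "cost_per_hour": 0.09},
    {"name": "cloud-tier-b",  "capacity": 200, "cost_per_hour": 0.04},
]

best = cheapest_option(storage_quotes, min_capacity=400)
print(best["name"])  # -> cloud-tier-a (cheapest option with enough capacity)
```

In a real deployment the quotes would be refreshed continuously — that is what makes the configuration “dynamic”: the selection can change hour to hour as prices and loads move.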

Find out more on The Software Defined Data Center

Want to learn more? Attend the panel on “Software Defined Data Centers – Next Steps for Storage and Networking” at Orlando Data Center World, or dive into any of the other 20 topical trend sessions curated by Data Center Knowledge at the event. You can also visit our previous post on cooling, Cooling Trends: Innovative Economization Increases ROI.

Check out the conference details and register at Orlando Data Center World conference page.


About the Author

Colleen Miller is a journalist and social media specialist. She has more than two decades of writing and editing experience, with her most recent work dedicated to the online space. Colleen covers the data center industry, including topics such as modular, cloud and storage/big data.
