
Three Reasons Why Hardware Matters in Software-Defined Storage

A dangerous idea has emerged in the industry at large – the idea that the hardware doesn’t matter.

Phil Straw

Software-defined infrastructure is among the most noteworthy advances in data center technology, providing new levels of flexibility in scale-out data infrastructures. Decoupling hardware and software has enabled a level of freedom that was previously unavailable and seeded a scaling revolution that continues to this day.

From this revolution, many software-defined storage (SDS) solutions were born. Vendors worked to build storage software that simplifies storage management with hardware-agnostic solutions, enabling a “Lego effect” that lets organizations scale up and down as needed. Hardware-agnostic storage, independent of proprietary hardware, delivering unlimited scalability, greater efficiency, freedom, and mobility in the data center: it’s a great vision, but is it the reality?

Missing the Mark

In truth, the reality doesn’t match the vision, and data center managers are increasingly feeling the drag. SDS has delivered a great many benefits, but it hasn’t been a panacea. Vendor lock-in is still largely the reality in the SDS ecosystem: data that has landed is increasingly expensive to move, and switching between proprietary solutions is cost-prohibitive.

What’s more, a dangerous idea has emerged in the industry at large: the idea that the hardware doesn’t matter. This idea has stalled innovation and created a race to the bottom, with vendors treating hardware as a place to cut corners for margin, relying on opaque commercial-off-the-shelf (COTS) platforms to deliver increasingly sophisticated storage software. The premise is that storage software vendors can optimize their packages to eliminate the inefficiencies inherent in one-size-fits-all COTS hardware. The dirty secret is that they can’t, especially not at scale.

Not everything can be solved in software. Clever workarounds that “optimize” a system might be fine for a one-off release that gets customers a working solution, but storage vendors can’t code physics out of existence. For every bottleneck you try to code around, you can end up coding in more power draw and more heat. That creates more demand for cooling, which in turn requires even more power and more space. Inefficiencies in these systems end up creating a vicious cycle of waste that organizations can’t easily escape.

Three Reasons Why Hardware Matters

The first rule of data infrastructure is this: hardware matters. That will become increasingly apparent as so-called “core-to-edge data infrastructure,” the shift to building more infrastructure outside the hyperscale data center, matures.

There are three key reasons why:

1. COTS-based systems aren’t optimal for edge deployments.

We put this notion to the test years ago, working with Australian special forces to create systems that could collect sensitive data in extreme environments like the Mariana Trench. We quickly found that real-time data infrastructures run headlong into the reality of physics. High-performance, low-latency infrastructure must sit close to where data is created and used, and in edge use cases space will always be a constraint. One-size-fits-all COTS systems are fundamentally inefficient for edge deployments, and a vicious cycle follows: space constraints combined with the inefficiencies between hardware and software lead to overheating, which creates a need for additional cooling, which requires additional real estate. Much of the innovation ends up going into the cooling infrastructure itself: hoses, liquid, cabinets, immersion, all of which consume power and space. Wouldn’t it make more sense to build cooler-running hardware that is optimized for the software it runs?

2. COTS-based supply chains are opaque and increasingly unreliable.

Virtually every country in the world relies on foreign-manufactured chips and sub-assemblies, with most of the componentry coming from Southeast Asia. This has created unavoidable dependencies that pose both economic and security challenges, and those challenges are exacerbated by uncontrollable global events, as the world has learned all too well in the post-COVID era of chip shortages and fragile supply chains.

But aside from these challenges, the industry faces a great contradiction as so-called “zero trust” security models take root in enterprises and government agencies around the globe. Zero trust is necessary precisely because most vendors ask their customers to take black-box designs on faith. In a world where the entire value chain, from design through sourcing, manufacturing, and delivery, is fully transparent, you no longer have to trust at all; that is the purest form of zero trust. The reality is that COTS-based hardware systems, at least as they exist today, rule out sovereign resilience and deny mission-critical infrastructures the secure provenance that transparent auditing makes possible.

3. COTS-based systems sabotage sustainability.

The unfortunate reality is that software-defined infrastructure, good idea though it is, has led to software bloat and an innovation malaise that is increasingly detrimental to carbon-reduction goals, especially as systems scale. A tremendous amount of waste exists in the current IT manufacturing ecosystem, making it difficult for organizations with large amounts of data to shrink their carbon footprints while keeping pace with growth. Instead of making hardware more efficient, IT vendors have thrown more processing power at I/O problems and leaned on “outside-in” approaches like software optimization. The result is inefficient, power-hungry, heat-producing, overly expensive architectures that create as many problems as they solve in fast-growing data centers. Energy reduction and carbon-footprint targets end up being met through clever greenwashing of the numbers rather than real innovation in the data center.

Taking Back Control

This is not an argument against software-defined infrastructure in the least. The issue is that the industry has all but discarded the importance of hardware in the quest to sell cheap systems at top prices, achieving exactly the opposite of what the software-defined ethos set out to accomplish. Ask yourself: who benefits more from the software-defined infrastructure in your own racks, you or the vendor whose name is stamped on it?

Solving this comes down to adding a little more rigor to the purchasing and acquisition process. IT architects need to start asking questions that affect their organization’s future, especially for mission-critical systems. Does our SDS solution really allow us to scale at the edge? Does it let us switch suppliers with relative ease? If we’re applying zero-trust principles to our networks, does that scrutiny extend to the hardware? Where is the hardware manufactured? Who assembled it? Can we prove the provenance of every component? Could we audit the source code if we wanted to? Can we scale without destroying our carbon-reduction goals, or without acquiring new real estate to do it?

Questions like these will help organizations keep the industry on a better path, one that responds more holistically to what customers really need. The software-defined paradigm has helped revolutionize the data center, and scalable storage in particular, but leaders should remember that hardware still matters. That will only become more apparent as edge strategies grow dominant and core data center scalability reaches its physical limits. When IT leaders start turning over more rocks looking for innovation at the hardware level, that is when the true value of software-defined infrastructure will be found.


Phil Straw is the CEO of SoftIron. The views expressed in this opinion do not necessarily reflect the views or positions of Data Center Knowledge or Informa Tech.
