IBM's Leading Data Center Storage Line Gets All-Flash Upgrade

One more swift kick toward the exit door of history for rotating hard disk drives

Scott Fulton III, Contributor

January 12, 2017

The new IBM DS8888F storage system (Photo: IBM)

Like the latter stages of a nicotine patch program, IBM has been steadily weaning data center storage off rotating hard disk drives and onto all-solid-state memory.  The next step in that program begins this morning, with IBM’s announcement of new models in its DS8880 data storage system series, whose storage enclosures are being replaced with all-flash units.

The company is introducing Models DS8884F, DS8886F, and DS8888F, with the “F” representing the all-flash substitution, compared with their non-“F” counterparts.  But in a departure from its previous policy, as Levi Norman, director of IBM Enterprise Storage, stated in an interview with Data Center Knowledge, the company will be coupling the phrases “Business Class,” “Enterprise Class,” and “Analytic Class,” respectively, to these models.  Its intention, he tells us, is to speak more clearly to data center managers who are more involved with the procurement process today than ever before — more clearly than what he described as IBM’s traditional “alphanumeric soup of nomenclature.”

“When you look at the overall architecture,” stated Norman, “I think you can’t discount the software stack along with the CPU complex, along with the storage complex itself.  And when you bring all those things together, and they operate in harmony because they were designed to operate in harmony, you can actually get better response times out of these styles of architecture than you could with something that was completely in-memory.”

What the “F” Adds

The “F” editions build off of the original DS8884, DS8886, and DS8888 models introduced in October 2015, Norman told us.  The first two in that series were hybrid enclosures containing up to 1,536 HDD units, although the DS8888 was always all-flash.


The “Analytic Class” model DS8888F [pictured left], as with the non-“F” model, is based around the Power System E850 server: a 4U, 4-socket component enabling up to 48 cores clocked at 3.02 GHz, or 32 cores clocked at 3.72 GHz.  Its chassis supports up to 128 host ports, 2 TB of DRAM, and over 1.2 PB of solid-state storage on a maximum of 384 flash cards.  (IBM has stopped using the phrase “flash drives” to draw a sharper distinction.)
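Those maximums imply a per-card density that is easy to sanity-check with a little arithmetic.  This is our own back-of-the-envelope reading of IBM's quoted figures, assuming decimal units; it is not a per-card capacity IBM publishes:

```python
# Rough per-card capacity implied by IBM's quoted maximums
# (assumption: decimal units, so 1 PB = 1,000 TB).
total_pb = 1.2   # "over 1.2 PB" of solid-state storage
cards = 384      # maximum number of flash cards

tb_per_card = total_pb * 1000 / cards
print(f"~{tb_per_card:.2f} TB per flash card")  # ~3.12 TB each
```

In other words, the 1.2 PB ceiling works out to flash cards of roughly 3 TB apiece at full population.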

By comparison, the “Enterprise Class” model DS8886F is based around the Power System S824, which enables up to two 8-core 4.15 GHz processors, two 6-core 3.89 GHz processors, or two 12-core processors clocked at 3.52 GHz.  The “Business Class” model DS8884F is based around the Power System S822, whose maximum core count configuration is two 10-core 3.42 GHz processors, and whose highest clock speed option is two 8-core 4.15 GHz CPUs.  All three new models will continue to use the same 8800-family rack enclosure as their non-“F” predecessors.

Is Analytics Storage Really That Different from Regular Storage?

It’s obvious that IBM has carved out three clear performance classes, in an adaptation of the classic “Good / Better / Best” marketing scheme that was a hallmark of the old RadioShack catalog.  But with analytics software makers directing their product pitches more toward smaller businesses, and with telcos and connectivity providers using “Business Class” to mean the upper speed tier, isn’t IBM worried that its efforts to address data center managers more directly might end up with a mismatch?

“The way that this product [line] is segmented,” Norman responded, “is because of the primary audience. . . those mission-critical-style clients that can’t afford any downtime in their business.  Those people are 19 of the top 20 banks, telcos that are easy to recognize, trading desks, healthcare institutions, big healthcare research environments.  When we’re talking to that audience with this product, it’s the higher-end — the ‘best’ end of the ‘good/better/best’ — that they tend to want.”

For that reason, IBM is also pushing its Analytic Class unit towards so-called “cognitive workloads” — which is a tricky message to craft, especially since it’s the other end of IBM that’s advancing the cause of all-DRAM architectures — as opposed to storage arrays — with its DB2 version 11.1.

“When you peel back the onion layers of cognitive to what it means,” IBM’s Norman told Data Center Knowledge, “underneath it, it behaves in an analytic manner.  I think the difference is, cognitive beyond analytics is meant to make sense of all the data that it pulls in, and then start to reason through it.  But underneath it, from an analysis perspective, it behaves like analytics.  You have to ingest massive amounts of data, make sense of it, get it back out of storage very quickly, and move it around very quickly, to get to a near-real-time answer that satisfied the question that was posed.”

Continuity as a Metric

Since the DS8880 series’ introduction in 2015, Norman has characterized its architecture as enabling “non-stop availability” of stored data, and he repeated that claim for us this time.  But with systems such as Hadoop enabling high availability by way of resilience techniques and high redundancy — essentially assuming the underlying hardware to be unreliable and subject to failure — what is the real business value of consolidating all that storage into one node and paying a premium for it?

Norman responded by saying IBM customers apply a higher-order metric, demanding that their storage hardware endow their systems with what he described as business continuity.  So we asked him, how many bare-bones, Open Compute Project-style, x86 servers would it take to provide a data center with the same level of business continuity as one IBM DS8888F?

For clarity, Norman passed on our question to an IBM technical team, who crafted this answer for us:

“It is a combination of availability and performance that we sell.  Hardware will always fail and it depends on what your definition of availability really is.

“We have sub-millisecond response times,” the IBM technical team continued, “and in the event of a hardware failure, we maintain that response time after failover in the individual system.  Error recovery is typically less than 6 seconds.  In a replication case, you are talking about hyperswap failover times, usually tens of seconds; for longer-distance asynchronous replication, the RPO [recovery point objective] is on the order of minutes or more, depending on the client implementation.  X86 single-system error recovery can take many times the 6 seconds; [and] some are in the minutes range.  So it is an apples-to-oranges comparison.”
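To make the recovery-time gap concrete, here is a hypothetical annualized comparison built from the figures quoted above.  The incident rate and the two-minute x86 recovery time are our assumptions for illustration only, not numbers IBM provided:

```python
# Hypothetical annual recovery downtime, using the recovery times quoted above.
# Assumptions (ours, not IBM's): 4 recoverable incidents per year,
# and the x86 "minutes range" taken as 2 minutes.
incidents_per_year = 4
recovery_seconds = {
    "DS8888F": 6,              # "typically less than 6 seconds"
    "x86 single system": 120,  # assumed 2 minutes
}
for system, seconds in recovery_seconds.items():
    print(f"{system}: ~{incidents_per_year * seconds} s of recovery time per year")
```

Under those assumptions the gap is seconds versus minutes of recovery time per year — which is precisely the kind of delta IBM's team argues matters to mission-critical clients, even before replication and RPO enter the picture.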

Perhaps so.  But IBM has just upgraded its lineup of oranges, so to speak, and is pitching them to customers with a room full of apples (small “a”).  So such comparisons are bound to be made.

About the Author

Scott Fulton III


Scott M. Fulton, III is a 39-year veteran technology journalist, author, analyst, and content strategist, the latter of which means he thought almost too carefully about the order in which those roles should appear. Decisions like these, he’ll tell you, should be data-driven. His work has appeared in The New Stack since 2014, and in various receptacles and bins since the 1980s.
