Inside an Equinix data center in Pantin, France (MARTIN BUREAU/AFP/Getty Images)

What Switching to NVMe Means for the Data Center

Going hand-in-hand with the switch to SSDs, NVMe will drive fundamental system and application changes.

Later this year, memory specialist Kingston Technology will add enterprise NVMe products alongside its line of SATA-based SSDs for the servers and storage arrays in tier-two cloud providers’ data centers.

Tempting as it might be to swap hard drives for SSDs without changing your storage and networking architecture, that’s not the way to get the most out of flash storage. Architectural changes are necessary to match the changing workloads. Plus, if you want to switch all storage tiers to flash in the future, your future self will thank your present self for making those architectural changes now.

SATA has had a good run. “As legacy as SATA is, and as stale from a technology standpoint, it gets the job done for most workloads,” Cameron Crandall, enterprise SSD product manager at Kingston, told Data Center Knowledge in an interview. “It has been refined, which has extended its life in the data center.”

Over the seven years that Kingston has been producing 6Gbps SATA SSDs, improvements in controller technology have taken performance from 20,000 IOPS to a current peak of around 100,000 IOPS. Those gains meant most tier-two cloud companies weren’t looking for faster interfaces such as NVMe. But “now we can't get faster bandwidth performance, because we're saturated at 600MB/s bandwidth [for SATA III],” he said.
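The 600MB/s ceiling Crandall mentions follows directly from the SATA III spec; a quick back-of-envelope sketch (plain arithmetic, no product-specific numbers):

```python
# Back-of-envelope arithmetic: why SATA III tops out near 600MB/s.
# The link signals at 6Gbps, but 8b/10b encoding puts 10 bits on the
# wire for every 8 bits of data.
line_rate_gbps = 6.0
encoding_efficiency = 8 / 10                          # 8b/10b line coding

payload_gbps = line_rate_gbps * encoding_efficiency   # 4.8Gbps of data
payload_mb_per_s = payload_gbps * 1000 / 8            # ~600MB/s

print(f"Usable SATA III bandwidth: ~{payload_mb_per_s:.0f}MB/s")
```

Once controller improvements push a drive’s sequential throughput to that ceiling, only a faster interface helps, which is exactly the opening NVMe over PCIe exploits.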

For the web serving and ecommerce hosting workloads that run in those data centers, SATA has continued to deliver. “They vary in their read-write mix, but SATA gets it done,” Crandall said, estimating that about 95 percent of the SSDs deployed in the tier-two cloud space are still SATA.

But as tier-two providers add new cloud workloads to their portfolios, they start to need NVMe to deliver the necessary performance, which is why Kingston will bring out NVMe storage this fall. Customers want better performance on their data sets in the cloud, and many providers are going after business they say they can do better than AWS or Azure. NVMe can give them that next performance boost, according to Crandall.

But this isn’t just a matter of choosing a different component, he warned.

One of the biggest changes is implementing redundancy in software instead of hardware, since moving to NVMe means removing the hardware RAID controller. “You use software to replicate your data,” Crandall said. Customers are “tied to those traditional redundancy practices, but NVMe was really designed to connect directly to the processor and get the storage host controller out of the bus, so to speak, because it slows the drives down.”
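To make the contrast concrete, here is a minimal, hypothetical sketch of what software-level mirroring can look like at its simplest: every write goes to each device in turn, with no RAID controller in the path. The device paths and helper function are purely illustrative; real deployments would rely on md/RAID, ZFS, or a distributed storage layer rather than hand-rolled replication.

```python
import os

# Hypothetical sketch of software-level mirroring: each write lands on
# every replica device, so no hardware RAID controller sits between the
# NVMe drives and the CPU. Device paths are illustrative only.
REPLICAS = ["/dev/nvme0n1", "/dev/nvme1n1"]

def replicated_write(offset: int, payload: bytes) -> None:
    """Write the same block to every replica, syncing each device."""
    for device in REPLICAS:
        fd = os.open(device, os.O_WRONLY | os.O_DSYNC)
        try:
            os.pwrite(fd, payload, offset)
        finally:
            os.close(fd)
```

The point of the design is visible in the loop: redundancy becomes the application’s (or filesystem’s) job, so each NVMe drive keeps its direct PCIe path to the processor.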

Transition Will Be Gradual

The complexity of that change means that tier-two data centers will move to NVMe gradually, he predicted. You’ll start seeing small amounts of NVMe introduced over the next two years or so, replacing SATA backplanes, “but it's just not going to happen overnight.”

Price sensitivity will also keep the switch slow. Tier-two data centers define their own hardware, sourcing commercially available off-the-shelf components to build their systems, buying white-box solutions, or buying from the big vendors that let them specify the hardware that goes into the systems. “Their strategy is to keep their hardware costs as low as possible, just like the tier-one guys.”

In the past, some data centers have used SSDs designed for laptops and desktops to save money. “Our high-end client drive was our best-selling enterprise SSD at one point,” Crandall admitted, but that’s risky. Instead, Kingston plans to keep prices down by providing drives for those older workloads that don’t need high endurance.

Most SSDs running in data centers perform one drive write per day (DWPD) or less, he said. Most of the storage pool is “read-centric,” so low-cost SSDs will be in higher demand. “They don't have to spend $1 per gigabyte across their entire storage infrastructure.” A majority-SATA storage pool, with some NVMe in the mix, will be good enough for most of their customers.

For the CDN market, Kingston will offer a drive rated for just a third of a drive write per day. “A lot of CDN applications are very read-intensive, and they’re going to want the lowest cost possible for the SSD, but they still want to be buying an SSD classified as a data center SSD.”
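Both of those figures are endurance ratings, usually quoted as drive writes per day: how many times the drive’s full capacity can be written daily over its warranty period. A quick sketch of the arithmetic, using illustrative numbers rather than any specific product’s spec:

```python
# DWPD: how many times the drive's full capacity can be written per day
# over its warranty period. All numbers below are illustrative.
def dwpd(tbw: float, capacity_tb: float, warranty_years: float = 5.0) -> float:
    """Drive writes per day from a total-bytes-written rating (in TB)."""
    return tbw / (capacity_tb * warranty_years * 365)

# A read-centric 1.92TB drive warranted for 1,752TB written over five years:
print(f"{dwpd(1752, 1.92):.2f} DWPD")              # -> 0.50
# A CDN-class rating of one-third DWPD needs only ~1,168TB written:
print(f"{(1 / 3) * 1.92 * 5 * 365:.0f}TB TBW")     # -> 1168
```

Lower endurance means less overprovisioned flash and cheaper NAND per usable gigabyte, which is how those read-centric drives keep costs down.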

Applications Will Need to Be Rewritten

The long-term changes will be fundamental, Alex McDonald, co-chair of the SNIA Solid State Storage Initiative, told us. Form factors will move away from traditional drives to the half- and full-length storage ‘ruler’ formats introduced for NVMe, with PCIe replacing SATA to deliver extreme density at low power. “It’s not unknown to fit a petabyte into a single 19-inch rack,” he suggested. “At the end of the day, I think SATA is dead.”
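McDonald’s petabyte-per-rack figure is, if anything, conservative for the ruler format. A rough density sketch, with an illustrative slot count and drive capacity rather than any vendor’s spec:

```python
# Rough density math for the full-length 'ruler' (EDSFF E1.L) format.
# Slot count and capacity are illustrative, not a product spec.
slots_per_1u_chassis = 32   # E1.L slots commonly fit across a 1U front panel
drive_capacity_tb = 32      # a high-capacity ruler SSD

tb_per_1u = slots_per_1u_chassis * drive_capacity_tb
print(f"~{tb_per_1u / 1000:.0f}PB in a single 1U chassis")   # -> ~1PB
```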

Increased density will also require investments in your networking architecture, McDonald warned. “Networking tends to be an afterthought when it comes to storage. A lot of people think SSD will fix their throughput problem if they have a slow system, but 10x the bandwidth doesn’t mean things go 10x faster. What they forget is that with a good-quality high-end SSD you can saturate the network with three or four drives rather than three or four hundred.” With fewer SSDs delivering the same capacity, you can’t spread writes over as many devices. Replace hundreds of disk drives in an array with SSDs, and you’ll likely find your network is inadequate. “It would just look like a bunch of drives in performance, because the bottleneck is the movement of the data in bulk.”
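That “three or four drives” figure checks out with round numbers; here is the back-of-envelope version, with illustrative throughput figures:

```python
# Why a handful of NVMe SSDs can saturate a data center network link.
# Throughput figures are illustrative round numbers.
ssd_read_gb_per_s = 3.0     # sequential read of a high-end NVMe SSD
link_gbps = 100             # one 100GbE port

link_gb_per_s = link_gbps / 8                        # 12.5GB/s
drives_to_saturate = link_gb_per_s / ssd_read_gb_per_s

print(f"~{drives_to_saturate:.0f} NVMe drives fill a {link_gbps}GbE link")
# -> ~4 drives, versus the hundreds of disk drives needed for the same load
```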

If you don’t already have cloud-style workloads, McDonald advised making longer-term plans, including rewriting your applications to move from monolithic code to microservices and more distributed systems. “There is a good economic case to take those first steps into SSD; but when people are looking at SSDs, they should be looking not to improve the throughput or the responsiveness of an application but to give themselves the ability to redesign the application to take advantage of these new technologies.”

Flash Will Go Cold

Server purchasing trends are making SSDs the predominant storage technology in all tiers of data centers, Todd Traver, VP of IT optimization and strategy at the Uptime Institute, told Data Center Knowledge. “We are seeing almost all new servers being purchased or leased have SSDs.”

“Probably half the sales in the industry are flash drives as opposed to spinning ones,” said McDonald. “The cost per bit of SSDs has fallen dramatically in the last couple of years, to the point where you're now paying for SSD what you were paying for spinning disks less than 18 months ago. It's not ridiculously cheap, but it is cheap, and people should be looking at it, and they are.”

But he also warned that current pricing trends are unlikely to continue, because they’ve been caused by overproduction: “There is a shortage of DRAM and a glut of NAND; that will probably go into reverse, and there will be a NAND shortage and DRAM glut.”

Drive density also isn’t going to go up as quickly as it has. “We’re not at the limit on density, but we’re getting close to it,” McDonald said. For current interface technologies, he predicted that 128TB would be the likely maximum for a single SSD.

The power advantage of SSD is also shrinking. High-end SSDs draw similar amounts of power to hard drives, because even when data isn’t actively being read or written, the drive is reorganizing data or flushing its cache to the physical medium.

Even so, SSD will continue to take on more roles in the data center. Mainstream data centers aren’t yet looking at persistent memory like Intel’s Optane, Crandall told us. But as those technologies emerge to take on the role of cache in the storage architecture, McDonald predicted that SSDs will become the primary storage location for both hot and cold data, because the economics are becoming so compelling, especially for computational storage devices with built-in compression.

“Right now, tier-two data centers are stuck with very large investments in disk drives that are storing long-term retained data,” he said. “In the long term, I see disk going the way of tape, as a niche speciality used for really long-term archiving, with SSD for cold storage. When it’s not doing anything, SSD isn’t consuming any electricity. We’ve had technologies for disks where you could spin drives down or turn them off, but being electromechanical devices, disks don’t like being turned on and off, and they don’t like physical shock. SSD is far more robust for long-term retention and will become cheap and useful for long-term data storage.”

“There is still a bit-rot factor for SSDs, but they have a lower bit-rot figure than tape: you have to sling a tape in the bin when it’s seven or ten years old. SSD is not as temperature-sensitive, and it’s not vibration-sensitive in the way a hard drive is.” SSDs will gain stronger error correction to improve longevity, which will also help.
