
Persistent Memory vs. Computational Storage

There’s an alternative to persistent memory that provides a different approach to compute and storage locality: computational storage.


Scott Shadley is Principal Technologist for NGD Systems.

Recently, the market has seen some spirited debate around compute and storage locality. Should compute and storage be co-located for optimal performance? Or is location irrelevant?

There are many ways to approach the conversation, with different angles emerging from people who prioritize specific technologies. For example, the cloud, big data and edge computing each have unique needs that influence how someone deploying one of them would think about compute and storage locality.  

Overall, there are three ways compute and storage locality is being approached today:

  1. Status quo: Compute stays where it is, in the host CPU, as is the case with AMD and some Intel chips.
  2. Accelerated: Compute is offloaded to accelerators such as GPUs and FPGAs from Nvidia, Xilinx and some Intel chips.
  3. Intelligent: Compute is moved directly to storage.

Persistent memory is a hot technology with major implications for this debate. It is a way of storing data so that the data can still be accessed even after the process that created or last modified it has ended. Persistent memory is a storage-class memory, which, at the end of the day, means it is the same old storage in a new wrapper.

That said, persistent memory is making major waves. Several summits have been launched around the technology, multiple vendors support it, and it is being implemented across multiple media types, including PCM, MRAM, RRAM, FeRAM and more. Persistent memory has been developed by moving these media products onto the memory bus, replacing the DRAM we use today and leveraging protocols like DDR4 and, eventually, Gen-Z. All of this has major impacts on the way systems are designed and used, and it ultimately necessitates a technology shift. What persistent memory means, essentially, is that storage must be moved closer to compute.
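To illustrate the "data survives the process" property, here is a minimal Python sketch. It assumes a Linux system where a persistent memory region is exposed as a DAX-mounted filesystem; the /mnt/pmem path and file name are hypothetical, and real deployments typically use libraries such as PMDK rather than raw mmap.

```python
import mmap
import os

# Hypothetical file on a DAX-mounted persistent memory filesystem.
PATH = "/mnt/pmem/example.bin"
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)

# Map the persistent media directly into the process address space.
buf = mmap.mmap(fd, SIZE)
buf[0:13] = b"hello, world!"  # ordinary memory stores, no read()/write() syscalls

# Flush so the data remains durable after the process (or the power) goes away.
buf.flush()
buf.close()
os.close(fd)
```

The point of the sketch is that the application talks to storage with load/store semantics over the memory bus, which is exactly why adopting persistent memory forces changes in how systems are designed rather than being a simple drop-in swap.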

And while moving storage to compute is doable and useful when deployed correctly, it is expensive and requires a bit more 'user interaction' than the average architect is able to provide (it's much more complicated than simply plugging in a DIMM). Persistent memory products have only just begun hitting the market, even though this work started back on DDR2 bus designs. It remains to be seen whether persistent memory can be delivered at a cost that allows broad adoption. Much of the hype results from the major names backing the technology, and many of these brands sell underutilized and overpriced platforms built around an archaic architecture from decades ago.

But there’s an alternative to persistent memory that provides a different approach to compute and storage locality: computational storage. Computational storage relies on a fundamentally new architecture in which compute moves down to storage, rather than storage moving up to compute. Moving compute to storage allows for new ways of managing data while reducing the required DRAM and CPU resources and expense. The reason comes down to data movement: with computational storage, less data is moved around the system, so less work is required from the rest of the platform.

This changes the basic economics and infrastructure requirements of environments that hold huge volumes of data and perform heavy-duty processing, such as hyperscale, content delivery network (CDN) and edge environments. Organizations in these categories have spent decades using systems in which the CPU must be upgraded just to handle fairly mundane tasks - search, count and other data-related activities that do not generate real results - or in which that work is offloaded to a GPU, adding cost, power and space. Computational storage gives those CPU resources back to the system for truly mission-critical work, analyzing the data rather than merely sorting it, increasing efficiency and eliminating the need to purchase GPU and FPGA accelerators.
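The contrast can be sketched in Python. This is a toy illustration, not any vendor's API: count_on_host streams every byte across the bus so the host CPU can scan it, while InStorageCounter is a hypothetical stand-in for a drive that runs the same scan next to the media and returns only the count.

```python
def count_on_host(path: str, needle: bytes) -> int:
    """Conventional path: every byte crosses the bus so the host CPU can scan it."""
    hits = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB reads over the bus
            # Matches split across chunk boundaries are ignored in this sketch.
            hits += chunk.count(needle)
    return hits


class InStorageCounter:
    """Hypothetical computational storage drive: the scan runs on the device's
    embedded cores, and only a single integer travels back to the host."""

    def __init__(self, path: str):
        self.path = path

    def count(self, needle: bytes) -> int:
        # Simulated on the host so the sketch stays runnable; on a real device
        # this loop would execute inside the drive, not on the host CPU.
        return count_on_host(self.path, needle)


if __name__ == "__main__":
    print(count_on_host("/var/log/syslog", b"error"))          # whole file moves to DRAM
    print(InStorageCounter("/var/log/syslog").count(b"error"))  # only the answer moves
```

The design point is that the result of a search or count is tiny compared to the data being scanned, so pushing the scan into the drive keeps DRAM, CPU cycles and bus bandwidth free for work that actually produces results.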

Like persistent memory, computational storage now comes in a variety of packages, including FPGA-accelerated options, host-CPU-managed media, and fully integrated, drop-in solutions. Computational storage technology has seen strong adoption over a relatively short time span, and the buzz is gaining attention: in January, SNIA held a full-day symposium session dedicated to computational storage.

The debate over compute and storage locality – and the conversation about persistent memory vs. computational storage – illustrates that the industry is ready to re-evaluate old architectures that have been in use for decades. There is room for different technologies; the best approach varies depending on the computing environment and an organization’s ultimate business goals. If you are running a small-scale in-memory database, such as Oracle, persistent memory can be the best option. But for any organization that prioritizes extreme scalability because of capacity growth and data-intensive workloads, computational storage is the ideal choice.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.

