[Photo: A prototype of The Machine by HPE]

HPE Rolls Out The Machine Prototype, Its Version of the Future of Computing

Single-address-space 40-node cluster has 160 terabytes of memory

Hewlett Packard Enterprise unveiled its answer to a problem looming in the near future: the datasets that need to be analyzed are outgrowing the capabilities of even the fastest processors. That answer is, essentially, a single massive pool of memory with a single address space.
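To get a feel for what a single address space means in practice, consider a loose single-machine analogy in Python: several processes writing directly into one shared pool instead of each keeping its own copy of the data. (This is only an illustration, not HPE's fabric; the pool size and byte values here are hypothetical.)

    from multiprocessing import Process, shared_memory

    def worker(pool_name, offset):
        # Attach to the common pool by name and write into it directly;
        # no private copy of the data is made for this process.
        shm = shared_memory.SharedMemory(name=pool_name)
        shm.buf[offset] = 1
        shm.close()

    if __name__ == "__main__":
        pool = shared_memory.SharedMemory(create=True, size=1024)
        procs = [Process(target=worker, args=(pool.name, i)) for i in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(bytes(pool.buf[:4]))  # b'\x01\x01\x01\x01'
        pool.close()
        pool.unlink()

The Machine extends this idea across physical machines: every node addresses the same fabric-attached memory rather than exchanging copies over a network.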

The prototype of The Machine the company introduced Tuesday has 160 terabytes of memory spread across 40 physical nodes, interconnected by a high-performance fabric.

Importantly, The Machine is not powered by Intel processors, which dominate the data center and high-performance computing markets, but by an ARM System-on-Chip designed by Cavium. While nobody can predict the future of computing, Intel’s role in HPE’s version of that future appears to be a lot smaller than it is today.

The core idea behind HPE’s new architecture – “the largest R&D program in the history of the company” – is shifting the focus from the processor to memory. From the press release:

“By eliminating the inefficiencies of how memory, storage and processors interact in traditional systems today, Memory-Driven Computing reduces the time needed to process complex problems from days to hours, hours to minutes, minutes to seconds – to deliver real-time intelligence.”

The philosophy is similar to the one behind in-memory computing systems such as those built around Oracle and SAP HANA databases. Holding all the data in memory theoretically makes computing faster, because data doesn’t have to be shuffled back and forth between storage and memory.
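A minimal sketch of that tradeoff (illustrative only; the file name and the number of passes are hypothetical):

    import time

    def analyze(rows):
        # Stand-in for a real analytics pass over the dataset.
        return sum(len(row) for row in rows)

    # Storage-bound: re-read the dataset from disk on every pass.
    start = time.perf_counter()
    for _ in range(100):
        with open("records.csv") as f:
            analyze(f)
    print("disk-bound passes:", time.perf_counter() - start)

    # Memory-driven: load once, then every pass runs at RAM speed.
    with open("records.csv") as f:
        rows = f.readlines()
    start = time.perf_counter()
    for _ in range(100):
        analyze(rows)
    print("in-memory passes:", time.perf_counter() - start)

(In practice the operating system's page cache blurs this difference for small files; the gap shows up at scales where the dataset doesn't fit in cache.)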

This means one of the biggest engineering challenges in creating these systems is designing the fabric that interconnects CPUs to memory in a way that avoids bottlenecks.

It’s a similar challenge to the one facing engineers working on another answer to the reportedly looming disconnect between data volume and processing muscle: offloading big processing jobs from the CPU to a large pool of GPUs.
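For a flavor of what that offloading looks like in code, here is a minimal sketch using the CuPy library, one common route to GPU offload in Python (it assumes an NVIDIA GPU is available; the array size is arbitrary):

    import numpy as np
    import cupy as cp

    data = np.random.rand(10_000_000).astype(np.float32)

    gpu_data = cp.asarray(data)        # copy the array into GPU memory
    result = cp.sqrt(gpu_data).sum()   # the heavy math runs on the GPU
    print(float(result))               # move one scalar back to the host

Note that the explicit copy to GPU memory is itself a data-movement cost, which is exactly the kind of shuffling a memory-driven architecture aims to eliminate.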

The Machine prototype uses photonics to interconnect components.

HPE expects the size of shared memory to scale in the future “to a nearly limitless pool of memory – 4,096 yottabytes,” which is 250,000 times the size of all digital data that exists today, according to the company:

“With that amount of memory, it will be possible to simultaneously work with every digital health record of every person on earth; every piece of data from Facebook; every trip of Google’s autonomous vehicles; and every data set from space exploration all at the same time.”
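For scale, a quick back-of-the-envelope calculation, assuming binary prefixes (1 yottabyte = 2**80 bytes; the press release doesn't say which convention it uses):

    # 4,096 yottabytes = 2**12 * 2**80 bytes = 2**92 bytes
    pool_bytes = 4096 * 2**80
    address_bits = pool_bytes.bit_length() - 1
    print(address_bits)  # 92

In other words, a flat, byte-addressable pool of that size would need 92-bit addresses, well beyond the 48-bit virtual addresses most of today's processors implement.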
