HPE Introduces ‘Turnkey’ AI Data Center Solution With Nvidia

The company becomes the latest hardware maker to build new AI-focused systems for enterprises. Analysts chime in on the trend.

Wylie Wong, Regular Contributor

June 18, 2024


Hewlett Packard Enterprise (HPE) has partnered with Nvidia to build what the company describes as a “turnkey” AI private cloud solution that gives enterprises everything they need to quickly and easily deploy generative AI applications.

HPE Private Cloud AI integrates Nvidia’s GPUs, networking, and AI Enterprise software platform with HPE’s servers and storage, all managed through a centralized management layer in the HPE GreenLake cloud, HPE executives said today (June 18) at the HPE Discover 2024 conference in Las Vegas.

“Private cloud AI is ready to run out of the box,” said HPE CTO Fidelma Russo in a recent media briefing. “You plug it in. You connect it to the GreenLake cloud, and three clicks later… your data science and IT operations teams are up and running the Nvidia software stack.”

HPE also unveiled new AI-optimized servers featuring Nvidia’s latest GPUs and Superchips and support for Nvidia hardware and AI software in its OpsRamp cloud-based IT operations management tool. A new conversational assistant in OpsRamp allows IT teams to more easily monitor and manage their AI infrastructure, Russo said.  

Hardware Vendors Clamoring to Partner with Nvidia

With today’s announcements, HPE has strengthened the AI offerings within its GreenLake platform, the company’s portfolio of on-premises solutions offered through a cloudlike, subscription-based model.


The company has become the latest hardware vendor to unveil new data center solutions designed for AI workloads. Cisco last week announced plans for its own all-in-one AI data center solution in collaboration with Nvidia called Cisco Nexus HyperFabric AI clusters. Dell and Lenovo have also previously announced Nvidia-powered data center systems.

Hardware vendors are collaborating with Nvidia on AI data center solutions due to strong customer demand, said Peter Rutten, research vice president within IDC’s worldwide infrastructure research organization.

Nvidia dominates the AI market with its GPUs and software ecosystem, which includes Nvidia AI Enterprise, a software suite of AI tools, frameworks, and pre-trained models that make it easier for enterprises to develop and deploy AI workloads.

“Everybody is developing solutions with Nvidia. They have no choice,” Rutten told Data Center Knowledge. “Several vendors develop solutions with other GPU and accelerator vendors, but they say: ‘If we go to a customer, and don’t put Nvidia in front of them, they will walk away.’ There is a perception among end users in the market that AI equals Nvidia.”

Hot Market for AI Data Center Hardware

Hardware vendors are racing to compete in the fast-growing market for AI-optimized data center hardware as enterprises pursue their own generative AI initiatives to improve business workflows and operations, enhance customer service, and boost worker productivity. Enterprises need AI-optimized hardware because of AI’s compute-intensive requirements.

AI represents the next evolution of IT infrastructure because AI has different system, data, and privacy requirements than existing workloads, said Melanie Posey, research director of cloud and managed services transformation at S&P Global Market Intelligence.

“There is a lot of opportunity for everybody on the infrastructure side of this because organizations don’t necessarily have the infrastructure in their data centers right now that are going to support AI use cases,” she said.

While enterprises can use the public cloud for AI, many will want to deploy generative AI on-premises for the same reasons they still run on-premises infrastructure: they store large amounts of proprietary or sensitive data in-house, have data privacy or regulatory compliance concerns, and worry about the cost of running generative AI applications in the public cloud, Posey said.

“All the reasons they still have on-premises infrastructure are magnified when you start talking about AI,” she said.

Nvidia AI Computing by HPE

HPE Private Cloud AI is the key offering in a new portfolio of products that HPE has co-developed with Nvidia, which the companies call “Nvidia AI Computing by HPE.”

Private Cloud AI, which will be available this fall as a fully managed or self-managed solution, is a fully integrated infrastructure stack that includes HPE ProLiant servers, HPE GreenLake for File Storage, and Nvidia Spectrum-X Ethernet networking, the company said.

The solution is designed for AI inferencing and retrieval augmented generation (RAG), which allows enterprises to use their own proprietary data for generative AI applications. It will also enable organizations to fine-tune the training of large language models, said Russo, who also serves as vice president and general manager of HPE’s hybrid cloud business unit.
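For readers unfamiliar with the pattern, retrieval augmented generation can be sketched in a few lines of plain Python. This is an illustrative toy, not HPE’s or Nvidia’s implementation: the bag-of-words embedding, the sample `docs` list, and the function names are hypothetical stand-ins for the neural embedding models, vector stores, and inference services a production stack would use.

```python
import math
import re

def embed(text):
    # Toy embedding: bag-of-words term counts.
    # (A production RAG stack would use a neural embedding model instead.)
    counts = {}
    for token in re.findall(r"[a-z0-9]+", text.lower()):
        counts[token] = counts.get(token, 0) + 1
    return counts

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors
    dot = sum(v * b.get(t, 0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    # Retrieval step: rank the enterprise's own documents against the query
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents, k=1):
    # Augmentation step: prepend retrieved proprietary context to the prompt
    # before it is sent to the generative model
    context = "\n".join(retrieve(query, documents, k))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical in-house documents standing in for proprietary enterprise data
docs = [
    "Q3 revenue grew 12 percent, driven by storage sales.",
    "The cafeteria menu changes every Monday.",
]
prompt = build_prompt("How did storage sales affect revenue?", docs)
```

The point of the pattern is that the model answers from the retrieved in-house context rather than only from its training data, which is why keeping that data on-premises matters to the enterprises Posey describes.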

HPE Private Cloud AI will come in four configurations to support AI workloads of all sizes, from small Nvidia L40S GPU systems to larger systems running H100 NVL Tensor Core GPUs and Nvidia GH200 NVL2 Grace Hopper Superchips.

“Each one is modular and allows you to expand or add capacity over time and maintain a consistent cloud-managed experience with the HPE GreenLake cloud,” Russo said.

On the software front, HPE Private Cloud AI will feature Nvidia AI Enterprise, including Nvidia NIM inference microservices that simplify the deployment of generative AI models. It also features HPE AI Essentials, a curated set of AI and data tools, and an embedded data lakehouse that will enable enterprises to unify and easily access structured and unstructured data stores on-premises or in the public cloud, Russo said.

The GreenLake cloud provides a private cloud control plane, a centralized management layer that allows enterprises to set up and manage their private cloud environment. It offers dashboards for monitoring and observability, and allows organizations to provision and manage their workloads, endpoints, and data across hybrid environments, the company said.

Can HPE Succeed with Private Cloud AI? Analysts Weigh In

Other hardware vendors have also developed turnkey AI data center solutions, said IDC’s Rutten. HPE’s new Private Cloud AI will be competitive in the market and attractive to enterprises – particularly those with sophisticated, advanced users ready to deploy generative AI applications – because it is a comprehensive solution, he said.

The solution includes security, sustainability notifications, consumption analytics, account user management, asset management, a wellness dashboard, AIOps and even HPE’s own virtualization capability, he said.

“I do think the market is ready for someone to combine all these different aspects of AI development and deployment into one platform – and that will help them with selling this,” Rutten said of HPE.

Andy Thurai, vice president and principal analyst at Constellation Research, said most enterprises today are still experimenting with AI, so initial traction for HPE Private Cloud AI may not be great. But as enterprises mature their AI applications and look for an optimized, cost-efficient data center solution that offers the best total cost of ownership for their AI workloads, HPE could do well in the market, he said.

“HPE has good potential to succeed in that space when the time comes,” Thurai told Data Center Knowledge.

HPE’s New AI-Optimized Servers

HPE today also announced three new AI-optimized servers:

  • HPE ProLiant Compute DL384 Gen12, which will feature the Nvidia GH200 NVL2 Superchip. It’s targeted at memory-intensive AI workloads, such as fine-tuning LLMs or deploying RAG.

  • HPE ProLiant Compute DL380a Gen12, featuring up to eight Nvidia H200 NVL GPUs. It’s designed for LLM users that need the flexibility to scale generative AI workloads.

  • HPE Cray XD670, which will feature eight Nvidia H200 Tensor Core GPUs. It’s targeted at LLM builders and AI service providers that need high performance for large AI model training and tuning.

The Cray system will be available this summer, while the two ProLiant systems will be available this fall. HPE will also support the Nvidia GB200 Grace Blackwell Superchip and Nvidia Blackwell GPUs in the future. Select models will feature direct liquid cooling, the company said.

HPE also announced that HPE GreenLake for File Storage has achieved Nvidia DGX BasePOD certification and Nvidia OVX storage validation, providing enterprises with the file storage solution they need for generative AI and GPU-intensive workloads, the company said.


About the Author(s)

Wylie Wong

Regular Contributor

Wylie Wong is a journalist and freelance writer specializing in technology, business and sports. He previously worked at CNET, Computerworld and CRN and loves covering and learning about the advances and ever-changing dynamics of the technology industry. On the sports front, Wylie is co-author of Giants: Where Have You Gone?, a where-are-they-now book on former San Francisco Giants players. He previously launched and wrote a Giants blog for the San Jose Mercury News, and in recent years, has enjoyed writing about the intersection of technology and sports.
