Nvidia signage (Image: Alamy)

Nvidia CEO Shares Vision for Overhauling Data Centers

Nvidia saw its shares soar last week after CEO Jensen Huang said he envisions a $1 trillion data center equipment overhaul thanks to accelerated computing and generative AI.

Nvidia shares gained a remarkable 24.7% last week, driven in part by comments CEO Jensen Huang made the day before the jump, emphasizing the need to replace outdated data center equipment with new chips as more companies adopt AI.

"The computer industry is going through two simultaneous transitions — accelerated computing and generative AI," said Huang during Nvidia’s second-quarter earnings statement to investors on May 24. "A trillion dollars of installed global data center infrastructure will transition from general-purpose to accelerated computing as companies race to apply generative AI into every product, service, and business process."

Outdated Equipment Will Be Replaced: But at What Cost?

There are questions, however, about the scale and cost of these replacements. While AI adoption may call for hardware upgrades or specialized processors to perform well, how much equipment needs replacing, and at what price, depends on each company's specific requirements. Enterprises should evaluate needs and costs on a case-by-case basis.

Bradley Shimmin, a data and AI industry analyst at technology research and advisory group Omdia, acknowledges the potential for companies to capitalize on the generative AI trend, which could demand new approaches to acceleration hardware. However, Shimmin does not fully endorse Huang's view that data centers must replace all of their equipment.

"For many use cases, especially those involving highly demanding model training requirements, companies will be looking to cut costs and speed time to market by investing in the latest and greatest AI hardware acceleration," Shimmin said. "However, there's a countering trend going on right now where researchers are learning how to do more with less model with fewer parameters, highly curated data sets, and smarter training/fine-tuning using PEFT [Parameter Efficient Fine-tuning] and LoRa, for example."

Data Center Financial Hurdles and Physical Limitations

Beyond the physical limitations of data centers, the pursuit of transistor density improvements comes with its own hurdles. Building fabs is costly, especially when coupled with the escalating expense of cutting-edge process nodes. Data center leaders must navigate these financial concerns while striving to meet ever-increasing demand for more advanced data center infrastructure.

As the data center industry continues to evolve, finding cost-effective ways to improve transistor density, along with retaining skilled staff, will be a crucial focus for data center operators.

Expanding Ecosystems and Chip Architectures

Chip manufacturers are also rushing to support generative AI use cases on smaller target platforms, such as Samsung's efforts to run full-scale models on-chip and in-phone, Shimmin pointed out. This suggests the overall ecosystem will expand across various chip types and deployment configurations, including back-end training and edge or on-device inference. Multiple chip architectures, such as RISC-V, FPGAs, GPUs, and specialized silicon like AWS Trainium and Inferentia, will play significant roles in this evolving landscape.

"It's easy to see that the overall ecosystem is going to explode," Shimmin said.

AI has become a focus for investors and data center infrastructure managers because of the scale it demands, driven largely by the runaway success of OpenAI's GPT models.

But creating powerful language or image models is something only a few companies can do. In the past, significant improvements came from scaling smaller models up to run on data-center-sized systems. To keep pushing the boundaries of the technology, companies will have to invest in better and more advanced hardware, lending credence to Huang's statement.

Karl Freund, founder and principal analyst at Cambrian-AI Research, said in a statement to Data Center Knowledge that he would never bet on Jensen being wrong.

"He is a visionary unmatched," Freund said. "Jensen has been saying for years that the data center would be accelerated, and that is happening. Based on the processor, the GPU segment accounted for the maximum revenue share of 46.1% in 2021."

Nvidia investors, however, may want to temper their expectations of a continual earnings rally. The returns from simply scaling models up may soon plateau. While AI adoption may require hardware upgrades or specialized processors for optimal performance, the extent of replacement will likely vary from company to company. As the technology ecosystem evolves, optimizations and advances in AI models are expected to offer alternatives that ease hardware demands.

Sam Altman, the OpenAI CEO who recently asked Congress to consider AI regulatory proposals, said further AI progress will not come from making models bigger.

"I think we're at the end of the era where it's going to be these, like, giant, giant models," Altman told an audience at an event held at MIT in early April as reported by Wired. "We'll make them better in other ways."
