Nvidia's Fermi GPU Targets the HPC Market


Last week Nvidia Corp. introduced its next-generation graphics processing unit (GPU) architecture, codenamed “Fermi” and optimized for high-performance computing. The new GPU will be used in a new supercomputer at Oak Ridge National Laboratory and is earning strong reviews from industry veterans. “NVIDIA and the Fermi team have taken a giant step towards making GPUs attractive for a broader class of programs,” said Dave Patterson, director of the Parallel Computing Research Laboratory at U.C. Berkeley. “I believe history will record Fermi as a significant milestone.”

The Fermi GPU architecture has been a hot topic in the HPC sector. Here’s a roundup of some of the coverage:

  • HPCwire: “GPU Computing 2.0 is upon us,” writes HPCwire editor Michael Feldman, who calls Fermi “the biggest step forward for general-purpose GPU computing since the introduction of CUDA in 2006. The stated goal behind the new architecture is two-fold: to significantly boost GPU computing performance and to expand the application range of the graphics processor.”
  • Inside HPC: “I’ve been attending technical conferences from coast to coast for a number of years and never have I experienced the electricity that I felt here,” writes John Leidel. “The hallways were abuzz with all sorts of burgeoning ideas on how to retask the GPU with new work.”
  • World Changing? “This is big stuff, important stuff, world changing stuff, and I’m having trouble wrapping my mind around it and this is something I’m supposed to be good at,” writes analyst Rob Enderle at TG Daily. “They are talking about supercomputing for the masses like we used to talk about computing for the masses before Steve Jobs and the Woz ever created the Apple II.”
  • The Tech Report, in a detailed review, notes that “Fermi has a number of computing features never before seen in a GPU, features that should enable new applications for GPU computing and, Nvidia hopes, open up new markets for its GeForce and Tesla products.”
  • But Was It Real? Several tech sites noted aspects of the chip shown during the GPU Technology Conference keynote that suggested it wasn’t a working prototype. The company later confirmed that the board had been an “engineering prototype” but insisted that the GPUs really exist and powered the demo.

About the Author

Rich Miller is the founder and editor-at-large of Data Center Knowledge, and has been reporting on the data center sector since 2000. He has tracked the growing impact of high-density computing on the power and cooling of data centers, and the resulting push for improved energy efficiency in these facilities.
