Few things could have made the statement Nvidia wanted to make better than its CEO, Jen-Hsun Huang, showing up last week at one non-profit’s San Francisco office bearing the chipmaker’s latest and greatest supercomputer as a gift.
Nvidia wants the world to see it as the leading processor maker for artificial intelligence, and what better way to position itself that way than to have its hardware in the hands of an AI research team that’s at the top of its field?
The non-profit is called OpenAI. Founded by Elon Musk, it is the Tesla and SpaceX founder’s attempt to ensure an AI future that’s good for humanity is more likely than an AI future that’s really, really bad for us.
Nvidia has gifted OpenAI its new supercomputer, called DGX-1, because it is positioning the system as “the world’s first supercomputer dedicated to artificial intelligence.”
The timing of the announcement is notable. Huang delivered the system to OpenAI’s offices a week before Intel, Nvidia’s biggest competitor and the company whose processors power most of the world’s servers, kicked off its big annual conference in San Francisco, Intel Developer Forum.
At the conference, Intel would unveil plans to make the next-gen product in its Xeon Phi family the company’s first processor designed specifically for AI workloads. Xeon Phi is already the chief competitor to Nvidia’s GPUs in many of the world’s fastest supercomputers, and the battle for share of what promises to become a fast-growing market for AI hardware is on.
To draw a contrast between itself and Nvidia, Intel is positioning Knights Mill (code name for the next-gen Xeon Phi part) as a general-purpose processor companies can use to run machine learning workloads as well as other types of analytics applications. Machine learning is a field in artificial intelligence where the biggest players are spending a lot of R&D money.
Intel also claims that Xeon Phi enables AI systems that train faster and scale better than systems powered by GPUs. Ian Buck, VP of Nvidia’s Accelerated Computing unit, took issue with those claims in a blog post, ripping into the benchmarks Intel used to make its case.
Intel used an outdated deep learning model in its benchmarking and unfairly compared its current-generation Xeon Phi processors with older-gen Nvidia GPUs, Buck wrote. A single DGX-1, the system Huang gifted to OpenAI, is more than five times faster than four Xeon Phi servers, according to Buck.
Intel and Nvidia have more than just each other to worry about in the AI hardware market. The biggest and most active companies in the field, Google and Facebook, haven’t been reluctant to get their hands dirty and design their own custom hardware when the market can’t address their needs. Google has designed its own processor for machine learning, called the Tensor Processing Unit, and Facebook is planning to open source the design of Big Sur, its custom AI server. Big Sur, by the way, is powered by Nvidia GPUs.
It is the amount of money profit-driven corporations are investing in AI research that worries Musk, and it is why he started OpenAI. As he explained in a recent interview with Recode, a future where the most powerful AI is controlled by a few people can lead to nothing less than dictatorships replacing democracies. Google’s AI ambitions appear to be particularly worrisome to Musk, even though he avoided uttering the G-word in the interview.
A non-profit that does AI research aggressively and open sources the results is one way to ensure that no single small group of people gets control of the most powerful AI and exploits it for its own selfish ends.
Here it is, in Musk’s own words: