Ian King (Bloomberg) -- When Groq didn’t turn up to its coming-out party -- last month’s AI Hardware Summit -- attendees worried that the secretive AI chip startup was in trouble. It was just busy.
“We weren’t going to go out there and just talk,” said Chief Executive Officer Jonathan Ross, a former Google chip executive. The company was jamming on a customer project and didn’t have time to prep a demo for the conference, even though it had sponsored the event.
When Groq does step into the limelight, it will show something surprising. The three-year-old startup is trying to upend semiconductor industry dogma by designing a new breed of chip purpose-built for artificial intelligence, a market it expects to be worth tens of billions of dollars in coming years.
Chipmakers agree on the opportunity. AI is a powerful way to understand and cash in on the flood of information created by everything from smartphones to vehicles. Traditional chips aren’t up to the task, so powerhouses such as Intel Corp. are acquiring startups that design new types of processors.
Big customers, such as Google and Amazon.com Inc., are even designing their own AI chips. Indeed, Ross started Google’s most-successful effort in this area: the Tensor Processing Unit, or TPU, which powers a rising volume of the internet giant’s AI workloads.
At Groq, Ross is scrapping the industry’s main way of squeezing more processing power out of its products. Most companies add more cores to their chips, because many smaller processors working together on one piece of silicon can carve up a problem and solve it more quickly than one brawny chip. Groq is taking the opposite approach.
Instead of seeking to improve on existing designs, Groq started with the software, then created a programmable chip that’s relatively simple but fast. It has a large amount of memory to store data, rows of circuits that crunch the required numbers, and not much else.
An accompanying compiler -- software that turns computer programs into instructions that the chip can execute -- is the secret sauce. It allows multiple machine-learning models to be loaded and run in a way that yields instant results, Ross said.
Machine learning is one of the most popular and powerful forms of AI. Groq’s chip and software are aimed at the inference part of machine learning. Once powerful computers have trained a software model by bombarding it with data, the resulting program is run on other machines to react rapidly to real-world inputs -- to infer, in other words.
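The train-once, infer-many split described above can be sketched in a few lines. This is a deliberately toy illustration -- a hypothetical one-parameter model, not anything resembling Groq’s or Google’s actual software:

```python
# Toy sketch of the training/inference split: training happens once,
# on data-rich machines; inference runs repeatedly on new inputs,
# often on entirely different hardware.

def train(examples):
    """'Training': fit a single scale factor by averaging output/input ratios."""
    return sum(y / x for x, y in examples) / len(examples)

def infer(model, x):
    """'Inference': apply the already-trained model to a fresh input."""
    return model * x

# Training is done once, up front, with lots of example data.
scale = train([(1, 2.0), (2, 4.0), (3, 6.0)])

# Inference then runs many times, reacting to real-world inputs.
print(infer(scale, 10))  # 20.0
```

Chips like Groq’s target only the second function: the repeated, latency-sensitive inference step.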
Examples include recognizing an object in a photo or a spoken phrase. Armed with a Groq chip, a computer will be able to instantly recognize the language being spoken and switch to the right machine-learning model to translate the words. Groq’s chips can keep many of these models stored at once, making the switch between them fast.
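The language-switching idea amounts to keeping several models resident and dispatching among them, so no model has to be reloaded when the input changes. A minimal sketch, with made-up model and function names that are purely illustrative and not Groq’s actual software:

```python
# Hypothetical sketch of model switching: several translation "models"
# (stand-ins here) stay loaded, and a detector picks which one to run.

MODELS = {
    "fr": lambda text: f"translated-from-french({text})",
    "de": lambda text: f"translated-from-german({text})",
}

def detect_language(text):
    # Stand-in for a real language-identification model.
    return "fr" if "bonjour" in text else "de"

def translate(text):
    lang = detect_language(text)
    # No reload step: the chosen model is already in memory.
    return MODELS[lang](text)

print(translate("bonjour tout le monde"))
```

The point of keeping models resident, as the article describes, is that switching becomes a cheap lookup rather than a slow load from elsewhere.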
The company thinks its chips will appeal to operators of giant data centers. The components are predictable in both the power they draw and the time they take to produce an answer, which should help data center operators plan better and save electricity. Those qualities also matter in self-driving cars, where power is limited.
In a way, Ross is trying to replicate what he achieved at Google, building an in-house chip effort that sparks an industrywide move to new technology. This time though it’s a commercial effort that pits him against some of the biggest names in the $470-billion chip industry, as well as a raft of newcomers with their own novel solutions.
The Mountain View, California-based company has about 70 employees -- fewer than a quarter of the engineers that a large chipmaker such as Intel would assign to designing a chip. Ross said Groq isn’t looking to be acquired. Instead, he hopes to secure a small number of customers who will deploy Groq chips widely and provide more than enough income for the company to thrive independently. The startup has begun sending out samples.
"It’s like elephant hunting," he said. "You only need a handful to sustain you, especially when we’re as lean as we are."