Chipmaker Nvidia’s ascendance has been the stuff of headlines for months now. The company, founded by Taiwanese-American electrical engineer Jensen Huang, doubled its profits this year thanks to strong sales of computer chips tailored to the development of artificial intelligence, and just over a month ago it was announced that Nvidia would replace its lagging competitor, Intel, on the Dow Jones Industrial Average. While Nvidia sits head and shoulders above the competition as the go-to source for chips that can perform complex A.I. tasks (image and speech recognition, text generation for tools like ChatGPT), as well as for the accompanying software, the New York Times reports that its rivals are in a “furious contest” to peel off some of its market share with their own offerings.
The specific area of A.I. development where competitors see an opportunity to “dethrone” Nvidia, as the Times puts it, is inferencing: the process by which an already-trained A.I. model carries out tasks like image and text generation. Rivals are also seeking to undercut Huang’s offerings on price and power consumption. The ‘Gray Lady’ reports that the latest chips from AMD (which is run by Lisa Su, Huang’s first cousin once removed) and Amazon, called the MI300 and Trainium, respectively, deliver faster speeds at lower prices than Nvidia’s, providing the first real signs of alternatives. Amazon has dedicated $75 billion this year alone to the development of A.I. chips and other hardware, and Google plans to sell services based on Trillium, its sixth generation of in-house chips.
Startups like SambaNova Systems, Groq, and Cerebras claim they can deliver better inferencing than Nvidia, with lower prices and power consumption to boot. Market researchers project that data centers will buy $126 billion in non-Nvidia chips for A.I. purposes, an increase of 49% over last year.
Still, Huang does not appear to be sweating from the heat of competition. On energy consumption, he acknowledges that his company’s state-of-the-art Blackwell A.I. chips require more energy to operate, but says they can perform many more operations per watt than competitors’ chips. “Our total cost of ownership is so good,” he told an audience at Stanford University earlier this year, “that even when competitor’s chips are free, it’s not cheap enough.” His potential clients don’t necessarily agree. Dan Stanzione, executive director of the Texas Advanced Computing Center, told the Times that his organization plans to buy SambaNova chips alongside a Blackwell-based supercomputer, because the Nvidia machine is “just too expensive” and consumes more power. Established American tech companies like Amazon, Google, and Meta appear to be taking Stanzione’s path, running large machines powered by Nvidia chips alongside chips from competitors like AMD and from startups, while also investing in their own designs.
Two weeks ago, Nvidia reported over $35 billion in quarterly revenue, up 94% from the previous year.