A smartphone with a displayed NVIDIA logo is placed on a computer motherboard in this illustration taken March 6, 2023. REUTERS/Dado Ruvic/Illustration

By Stephen Nellis

SAN FRANCISCO, Dec 3 (Reuters) - Nvidia on Wednesday published new data showing that its latest artificial intelligence server can improve the performance of new models - including two popular ones from China - by 10 times.

The data comes as the AI world has shifted its focus from training AI models, where Nvidia dominates the market, to putting them to use for millions of users, where Nvidia faces far more competition from rivals such as Advanced Micro Devices and Cerebras.

Nvidia's data focused on what are known as mixture-of-experts AI models. The technique is a way of making AI models more efficient by breaking up questions into pieces that are assigned to "experts" within the model. The approach exploded in popularity after China's DeepSeek shocked the world in early 2025 with a high-performing open source model that required less training on Nvidia chips than rivals' models.
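In such a model, a small gating function scores each piece of input and sends it only to the few "experts" best suited to handle it, so most of the model's parameters sit idle on any given request. The sketch below is a toy illustration of that routing step in Python; the expert count, weights and inputs are made up for the example and are not drawn from DeepSeek's or Moonshot's models.

    # Toy mixture-of-experts routing step (illustrative only; all values are made up).
    import numpy as np

    def softmax(x):
        e = np.exp(x - np.max(x))
        return e / e.sum()

    def moe_layer(token, experts, gate_weights, top_k=2):
        """Route one token vector to its top_k experts and mix their outputs."""
        scores = softmax(gate_weights @ token)      # one score per expert
        top = np.argsort(scores)[-top_k:]           # only the best-scoring experts run
        mix = sum(scores[i] * experts[i](token) for i in top)
        return mix / scores[top].sum()              # renormalize over the chosen experts

    # Toy setup: 4 "experts", each just a small random linear map.
    rng = np.random.default_rng(0)
    experts = [lambda t, W=rng.standard_normal((8, 8)): W @ t for _ in range(4)]
    gate_weights = rng.standard_normal((4, 8))
    token = rng.standard_normal(8)

    print(moe_layer(token, experts, gate_weights))

Because only a couple of experts run per input, the model does far less work per query than a comparably sized dense model, which is part of why the technique cuts training and serving costs.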

Since then, the mixture-of-experts approach has been adopted by ChatGPT maker OpenAI, France's Mistral and China's Moonshot AI, which in July released a highly ranked open source model of its own.

Meanwhile, Nvidia has focused on making the case that while such models might require less training on its chips, its offerings can still be used to serve those models to users.

Nvidia on Wednesday said that its latest AI server, which packs 72 of its leading chips into a single computer with speedy links between them, improved the performance of Moonshot's Kimi K2 Thinking model by 10 times compared to the previous generation of Nvidia servers, a similar performance gain to what Nvidia has seen with DeepSeek's models.

Nvidia said the gains primarily came from the sheer number of chips it can pack into servers and the fast links between them, an area where Nvidia still has advantages over its rivals.

Nvidia competitor AMD is working on a similar server packed with multiple powerful chips that it has said will come to market next year.

(Reporting by Stephen Nellis in San Francisco, editing by Deepa Babington)