DeepSeek’s AI models are giving Chinese chipmakers, including Huawei, a competitive edge, helping them challenge U.S. processors despite ongoing export restrictions on advanced chips. The rise of DeepSeek’s artificial intelligence (AI) models has allowed Huawei and its peers to better contest the dominance of U.S. firms, particularly Nvidia, in China’s domestic AI chip market.
For years, Huawei and other Chinese chipmakers struggled to match Nvidia’s top-tier processors, which excel at training AI models, the data-intensive stage in which algorithms learn to make accurate decisions. DeepSeek’s models, however, emphasize the “inference” stage, in which a trained model draws conclusions and makes predictions, and they prioritize computational efficiency over raw processing power.
Analysts believe this shift toward inference-focused models may help narrow the gap between Chinese-made AI processors and their more powerful U.S. counterparts, making them more competitive.
Huawei and other Chinese AI chipmakers, such as Hygon, Tencent-backed EnFlame, Tsingmicro, and Moore Threads, have recently stated that their products will support DeepSeek’s models, though few details have been shared. Huawei declined to comment, and Moore Threads, Hygon, EnFlame, and Tsingmicro did not respond to Reuters’ requests for further clarification.
DeepSeek’s open-source nature and low fees are expected to drive AI adoption and accelerate real-world applications, helping Chinese firms work around U.S. export restrictions on high-performance chips. Even before DeepSeek gained widespread attention, products such as Huawei’s Ascend 910B were seen as better suited to less computationally demanding inference tasks, such as powering chatbots or generating predictions from trained AI models.
In China, many companies across various industries—from automakers to telecoms—are integrating DeepSeek’s models into their products and operations.
“This development aligns well with the capabilities of Chinese AI chipset vendors,” said Lian Jye Su, a chief analyst at Omdia. “Chinese AI chipsets may struggle to compete with Nvidia’s GPUs in AI training, but AI inference workloads are more forgiving and can benefit from local and industry-specific understanding.”
Nvidia Still Maintains Dominance
Despite the progress made by Chinese chipmakers, Nvidia continues to dominate the AI chip market. Bernstein analyst Lin Qingyuan noted that while Chinese AI chips are competitive in inference tasks, their use remains largely confined to the Chinese market, and Nvidia’s chips outperform them even in inference workloads.
While U.S. export restrictions prevent Nvidia from selling its most advanced AI training chips to China, the company is still allowed to supply less powerful chips suitable for inference tasks. Nvidia recently published a blog post arguing that its chips are essential to enhancing the utility of DeepSeek and other “reasoning” models.
Beyond raw computing power, Nvidia’s CUDA platform, which allows software developers to use Nvidia GPUs for general computing tasks—not just AI or graphics—is a key factor in the company’s continued dominance. Many Chinese AI chip companies have not directly challenged Nvidia’s CUDA but instead claim compatibility with it.
Huawei has been particularly aggressive in trying to break free of Nvidia’s influence, offering a CUDA equivalent called Compute Architecture for Neural Networks (CANN). Experts point out, however, that Huawei faces an uphill battle in persuading developers to abandon CUDA, whose extensive software libraries and capabilities would take significant long-term investment to replicate.
“Software performance from Chinese AI chip firms remains lacking at this stage,” said Omdia’s Su. “CUDA has a rich library and diverse software capabilities, which are difficult to replicate without considerable time and resources.”