Alibaba co-founder and philanthropist Jack Ma’s Ant Group has reportedly achieved a breakthrough in AI model training by integrating Chinese-made semiconductors, a move that could cut computing costs by 20%. Bloomberg reports that Hangzhou-based Ant Group used domestic chips from Alibaba and Huawei to train models with the Mixture of Experts (MoE) machine learning approach, which activates only a subset of a model’s parameters for each input and so requires far less compute than running the full network.

Ant Group’s training setup combines Chinese- and U.S.-made semiconductors, helping reduce computing costs while limiting reliance on any single major chip supplier such as NVIDIA. Sources familiar with the matter said Ant Group achieved results comparable to those produced with NVIDIA H800 chips, though they requested anonymity because the information is not yet public.

SEE: New World’s Smallest Supercomputer: Pre-Order NVIDIA’s DGX Spark Today

Shift away from NVIDIA amid export controls

Although Ant Group still uses NVIDIA chips, the company is increasingly relying on alternative semiconductors for its latest MoE models. The shift signals where the company stands in the ongoing AI race between U.S. and Chinese firms, and it shows that Chinese developers can build competitive models without depending exclusively on U.S.-based suppliers. This is particularly significant because the H800 chip is currently restricted under U.S. export controls, part of Washington’s effort to curb China’s access to the cutting-edge hardware critical for AI development.

Ant Group recently published a research paper claiming its models have, at times, outperformed Meta’s in internal benchmark tests. The company also suggested its model strategy could lower the cost of inference, the process of running a trained model to deliver real-time AI services, and make advanced capabilities more affordable. If the claims hold up, Ant Group’s cost-efficient training techniques could mark a significant milestone in China’s artificial intelligence development strategy.

SEE: DeepSeek Locked Down Public Database Access That Exposed Chat History

Rival startups and broader industry implications

Ant Group is not alone in this push. Chinese startup DeepSeek released its R1 AI model earlier this year, contributing to growing momentum around the idea that powerful AI models can be trained at lower cost. Ant Group has also open-sourced its Ling models, Ling-Lite and Ling-Plus, further encouraging AI development across the region.

MoE models are rapidly becoming a preferred approach in AI training. The technique splits a large model into specialized “expert” subnetworks and routes each input to only a handful of them, preserving overall capacity while cutting the compute spent per token. Ant Group’s cost-conscious training method may help broaden access to AI by reducing developers’ reliance on premium, high-performance chips.
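To make the routing idea concrete, here is a minimal sketch of top-k MoE routing in Python with NumPy. Every detail here, the expert count, the dimensions, and the random router, is an illustrative assumption rather than anything from Ant Group’s Ling models; the point is only that each token passes through top_k of n_experts subnetworks, so compute scales with top_k rather than with the total parameter count.

```python
# Minimal sketch of Mixture-of-Experts (MoE) top-k routing.
# Illustrative assumptions throughout -- not Ant Group's actual architecture.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" is a small feed-forward weight matrix (assumed shape).
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
# The router scores how well each expert suits a given token.
router = rng.standard_normal((d_model, n_experts))

def moe_layer(x):
    """Route token vector x to its top-k experts and mix their outputs."""
    scores = x @ router                       # one score per expert
    top = np.argsort(scores)[-top_k:]         # indices of best-scoring experts
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                  # softmax over chosen experts only
    # Only top_k of n_experts run, so per-token compute stays small.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
print(moe_layer(token))
```

In production systems, the experts are full feed-forward blocks inside transformer layers and the router is trained jointly with them, but the cost-saving mechanism is the same: most of the model’s parameters sit idle for any given token.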

Despite growing interest in cost-saving strategies, NVIDIA CEO Jensen Huang offered a counterpoint at the company’s GTC conference last week. He argued that companies looking to maximize revenue will require more powerful chips, not cheaper ones, suggesting that the future of AI infrastructure lies in performance, not price.
