Alibaba has announced the release of its next-generation artificial intelligence model, Qwen3-Next, designed to be significantly more powerful and cost-efficient than earlier versions. The launch marks a major step in the company's ongoing strategy to strengthen its global AI presence.
The newly introduced Qwen3-Next-80B-A3B has 80 billion parameters in total, of which only about 3 billion are activated per token (the "A3B" in its name). Alibaba says it delivers up to 10 times the performance of its predecessor while costing roughly one-tenth as much to train. These gains come from techniques such as hybrid attention, a mixture-of-experts (MoE) architecture, and multi-token prediction. Together, the upgrades improve efficiency, training stability, and the ability to process long-form content accurately.
Performance and scalability
According to the development team, the new model not only outperforms Qwen3-32B on key benchmarks but also matches the performance of the larger Qwen3-235B-A22B, previously considered the company's flagship model. Despite its smaller size, Qwen3-Next is optimized for consumer-grade hardware, making it accessible to developers and businesses worldwide.
Additionally, the reasoning-focused variant, Qwen3-Next-80B-A3B-Thinking, has surpassed both Alibaba's Qwen3-32B-Thinking and Google's Gemini-2.5-Flash-Thinking in independent evaluations, underscoring the company's commitment to building competitive, open-source AI systems that rival global leaders.
Expansion of the open-source ecosystem
Alibaba has established Qwen as one of the world's most extensive open-source AI ecosystems. By releasing its models openly, the company is enabling developers to use, adapt, and distribute advanced AI technologies across industries. This approach also accelerates adoption by lowering costs and improving access to cutting-edge tools.
Earlier this year, Qwen models were optimized for Apple's MLX framework, enabling iPhones and other Apple devices to run advanced AI applications. Reports suggest that Apple is partnering with Alibaba to power Apple Intelligence with Qwen models within China, while it relies on OpenAI's GPT models elsewhere.
Technical innovations
The improvements in Qwen3-Next stem from several architectural innovations. The MoE architecture splits the model into specialized sub-networks, or "experts," and activates only a small subset of them for each input, which keeps the compute required per token low. Hybrid attention improves long-text processing, while multi-token prediction allows the model to predict several upcoming tokens per step, improving output speed and fluency. Enhanced stability during training ensures more reliable results, even in large-scale deployments.
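The release does not spell out Qwen3-Next's exact routing scheme, but the general MoE idea can be sketched in a few lines of PyTorch: a router scores the experts for each token and only the top-k of them run, so most of the model's parameters sit idle on any given forward pass. The layer sizes, expert count, and the TinyMoE name below are illustrative assumptions, not details from Alibaba's announcement.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Toy mixture-of-experts layer: a router picks the top-k expert networks
    for each token, so only a fraction of the total parameters is active."""

    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)   # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                              # x: (n_tokens, d_model)
        scores = self.router(x)                        # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1) # keep only the best experts
        weights = F.softmax(weights, dim=-1)           # normalize over the chosen ones
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                 # run each selected expert
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e               # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

x = torch.randn(16, 64)                                # 16 tokens of width 64
print(TinyMoE()(x).shape)                              # torch.Size([16, 64])
```

In Qwen3-Next's case the ratio is far more extreme, with roughly 3 billion of 80 billion parameters active per token, but the routing principle is the same.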
These innovations demonstrate how Chinese AI firms are rapidly narrowing the gap with US-based competitors by leveraging open-source development.
Global positioning
The release of Qwen3-Next follows the launch of Qwen3-Max-Preview, Alibaba's largest AI model with over one trillion parameters, which ranked sixth on the LMArena "text arena" leaderboard. With Qwen3-Next, the company is expanding its influence across the global AI landscape while offering cost-effective solutions to developers and enterprises.
By combining scalability, affordability, and strong performance, Alibaba is positioning Qwen3-Next as a transformative tool in the competitive world of artificial intelligence.

