Alibaba has announced the release of its next-generation artificial intelligence model, Qwen3-Next, designed to be significantly more powerful and cost-efficient than earlier versions. This marks a major step forward in the company’s ongoing strategy to strengthen its global AI presence.
The newly introduced Qwen3-Next-80B-A3B has 80 billion total parameters, of which only about 3 billion are activated per token (the “A3B” in its name). Alibaba says it delivers up to 10 times the throughput of its predecessor while cutting training costs to roughly one-tenth. These gains come from techniques such as hybrid attention, a mixture-of-experts (MoE) architecture, and multi-token prediction. Together, these upgrades improve efficiency, training stability, and the model’s ability to process long-form content accurately.
Performance and scalability
According to the development team, the new model not only outperforms Qwen3-32B on key benchmarks but also approaches the performance of the larger Qwen3-235B-A22B, previously considered the company’s flagship model. Despite its smaller size, Qwen3-Next is optimized for consumer-grade hardware, making it accessible to developers and businesses worldwide.
Additionally, the reasoning-focused variant, Qwen3-Next-80B-A3B-Thinking, has surpassed both Alibaba’s Qwen3-32B-Thinking and Google’s Gemini-2.5-Flash-Thinking in independent evaluations, underscoring the company’s commitment to open-source AI systems that rival those of global leaders.
Expansion of the open-source ecosystem
Alibaba has established Qwen as one of the world’s most extensive open-source AI ecosystems. By releasing its models openly, the company is enabling developers to use, adapt, and distribute advanced AI technologies across industries. This approach also accelerates adoption by lowering costs and improving access to cutting-edge tools.
Earlier this year, Qwen models were optimized for Apple’s MLX framework, enabling iPhones and other Apple devices to run advanced AI applications. Reports suggest that Alibaba’s Qwen models power Apple Intelligence within China, while globally Apple relies on OpenAI’s GPT models.
Technical innovations
The improvements in Qwen3-Next stem from several architectural innovations. The MoE architecture divides the model’s feed-forward layers into specialized sub-networks, or “experts”; a routing network activates only a small subset of experts for each token, keeping compute costs far below those of a dense model of the same size. Hybrid attention improves long-text processing, while multi-token prediction speeds up generation by emitting several tokens per step. Enhanced training stability ensures more reliable results, even in large-scale deployments.
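The expert-routing idea described above can be sketched in a few lines. This is a minimal illustration of top-k MoE gating, assuming an 8-expert layer with 2 experts active per token; the sizes, expert count, and weights are illustrative and not Qwen3-Next’s actual configuration.

```python
# Minimal sketch of mixture-of-experts (MoE) top-k routing.
# All dimensions and parameters below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # total experts in the layer (illustrative)
TOP_K = 2         # experts activated per token (illustrative)
D_MODEL = 16      # hidden size (illustrative)

# Each "expert" is a small feed-forward weight matrix.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.1
           for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((D_MODEL, NUM_EXPERTS)) * 0.1  # gating weights

def moe_forward(x):
    """Route a token vector x to its top-k experts and mix their outputs."""
    logits = x @ router                    # one router score per expert
    top = np.argsort(logits)[-TOP_K:]      # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over selected experts only
    # Only the selected experts run: per-token compute scales with TOP_K,
    # not NUM_EXPERTS, which is how a large MoE model stays cheap per token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
out = moe_forward(token)
print(out.shape)  # → (16,)
```

The same principle explains the “80B total, ~3B active” trade-off: total capacity grows with the number of experts, while per-token cost is fixed by how many are activated.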
These innovations demonstrate how Chinese AI firms are rapidly narrowing the gap with US-based competitors by leveraging open-source development.
Global positioning
The release of Qwen3-Next follows the launch of Qwen3-Max-Preview, Alibaba’s largest AI model with over one trillion parameters, which ranked sixth on the LMArena “text arena” leaderboard. With Qwen3-Next, the company is expanding its influence across the global AI landscape while providing cost-effective solutions to developers and enterprises.
By combining scalability, affordability, and powerful performance, Alibaba is positioning Qwen3-Next as a transformative tool in the competitive world of artificial intelligence.

