As global competition in large models enters deeper waters, Alibaba Cloud is answering with a denser, more open model matrix. On October 22, the Qwen3-VL team officially launched two new dense models in the Qwen3-VL family: 2B and 32B. These models not only fill key gaps in the existing lineup but also bring the number of open-source models in the series to 24, forming a complete technical ecosystem from lightweight to ultra-large scale.

To date, the Qwen3-VL family comprises four dense models (2B, 4B, 8B, 32B) and two Mixture-of-Experts (MoE) models (30B-A3B and 235B-A22B), spanning parameter scales from 2 billion to 235 billion and covering the full range of scenarios from edge-device deployment to ultra-large-scale cloud inference. Notably, every model is offered in two versions, Instruct (instruction-tuned) and Thinking (reasoning-enhanced), so developers can choose flexibly based on the characteristics of their tasks.
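As a quick illustration of how an Instruct checkpoint can be used, the sketch below loads a model through Hugging Face Transformers and runs a single image-plus-text prompt. It is a minimal sketch, not an official quickstart: it assumes a recent Transformers release with the unified multimodal chat-template API, and the repo id `Qwen/Qwen3-VL-32B-Instruct` and the example image URL are assumptions based on the naming scheme described above.

```python
# Minimal sketch: querying a Qwen3-VL Instruct checkpoint with Transformers.
# Assumptions: repo id and image URL are illustrative; a recent Transformers
# version with AutoModelForImageTextToText and multimodal chat templates.
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "Qwen/Qwen3-VL-32B-Instruct"  # assumed repo id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/chart.png"},  # placeholder image
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Build model inputs (prompt tokens plus processed image) from the chat template.
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens and decode only the newly generated answer.
answer = processor.batch_decode(
    output[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```

Swapping in a Thinking checkpoint would follow the same loading pattern; the difference lies in the model's reasoning-oriented output style rather than in the calling code.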

To balance performance and efficiency, Alibaba Cloud also released 12 FP8-quantized model variants. These lightweight versions substantially reduce memory usage and inference latency with minimal accuracy loss, allowing high-performance multimodal AI to be deployed quickly in more real-world business scenarios.
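The memory benefit is easy to estimate from first principles: FP8 stores one byte per weight versus two bytes for BF16, so weight memory roughly halves. The back-of-the-envelope calculation below illustrates this for the dense model sizes; it covers weights only and ignores activations, KV cache, and framework overhead.

```python
# Rough weight-memory estimate: BF16 (2 bytes/param) vs. FP8 (1 byte/param).
# Weights only; activations, KV cache, and runtime overhead are excluded.
def weight_memory_gib(num_params_billion: float, bytes_per_param: float) -> float:
    return num_params_billion * 1e9 * bytes_per_param / 1024**3

for params in (2, 4, 8, 32):
    bf16 = weight_memory_gib(params, 2)
    fp8 = weight_memory_gib(params, 1)
    print(f"{params}B params: BF16 ~= {bf16:.1f} GiB, FP8 ~= {fp8:.1f} GiB")
```

For the 32B model, for example, this puts the weights at roughly 60 GiB in BF16 versus roughly 30 GiB in FP8, which is the difference between needing multiple accelerators and fitting on a single high-memory GPU.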

All open-weight Qwen3-VL models are now fully available. They can be downloaded free of charge from the ModelScope community and the Hugging Face platform, and they are licensed for commercial use. This strategy greatly lowers the barrier for enterprises to adopt cutting-edge multimodal capabilities and gives academia and startup teams an immediately usable technological foundation.
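For readers who want to pull the weights locally, the sketch below uses the standard download helpers from both hubs. The repo ids are assumptions following the series' naming pattern and should be checked against the actual listings.

```python
# Minimal sketch: downloading open weights from Hugging Face or ModelScope.
# The repo id "Qwen/Qwen3-VL-2B-Instruct" is assumed, not a confirmed listing.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("Qwen/Qwen3-VL-2B-Instruct")
print("Weights downloaded to:", local_dir)

# The same weights are mirrored on ModelScope; the equivalent call would be:
# from modelscope import snapshot_download
# local_dir = snapshot_download("Qwen/Qwen3-VL-2B-Instruct")  # assumed ModelScope id
```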

At a time when closed-source models are building ever higher walls, Alibaba Cloud has chosen to break through with an open-source ecosystem. The continued expansion of Qwen3-VL is not only a demonstration of technical strength but also a firm commitment to open collaboration and inclusive intelligence. As the model family grows, Tongyi Qianwen is moving from merely "usable" toward "good to use" and "easy to use," accelerating the journey of AI capabilities from the laboratory into real-world scenarios across industries.