The Alibaba Qwen team officially launched the Qwen3.5 small model series today, including four lightweight models: Qwen3.5-0.8B, Qwen3.5-2B, Qwen3.5-4B, and Qwen3.5-9B, along with their corresponding base versions.
These small models are all built on the unified Qwen3.5 architecture, featuring native multimodal capabilities (image-text processing), an improved model structure, and scalable reinforcement-learning training, achieving higher intelligence with fewer compute resources.

0.8B / 2B: Extremely compact and fast at inference, optimized for edge hardware; ideal for mobile devices, IoT devices, and low-latency real-time interaction scenarios;
4B: A powerful multimodal foundation model, especially well suited as the core of lightweight agents, striking an excellent balance between performance and resource consumption;
9B: Compact in size but exceptionally capable. Official and community benchmarks place it close to, or even on par with, much larger models such as gpt-oss-120B.

The Qwen team stated that this series aims to better support academic research, rapid experimentation, and practical industrial deployment. With this release, the Qwen3.5 family covers a complete size range from 0.8B to 397B-A17B, further rounding out the open-source ecosystem. The models are now available on the Hugging Face collection page (https://huggingface.co/collections/Qwen/qwen35) and the ModelScope community (https://modelscope.cn/collections/Qwen/Qwen35). Developers can download and try them immediately, significantly lowering the barrier to edge-side and local deployment. Competition in the small-model arena continues to heat up!
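For readers who want to try a checkpoint locally, the sketch below shows the usual Hugging Face `transformers` quick-start pattern. Note that the repository names (e.g. `Qwen/Qwen3.5-4B`) are assumptions inferred from the model names in the announcement, and the `generate` helper is a hypothetical illustration, not official Qwen sample code.

```python
# Assumed Hub repository ids, inferred from the announced model names.
MODEL_IDS = [
    "Qwen/Qwen3.5-0.8B",
    "Qwen/Qwen3.5-2B",
    "Qwen/Qwen3.5-4B",
    "Qwen/Qwen3.5-9B",
]


def generate(model_id: str, prompt: str, max_new_tokens: int = 128) -> str:
    """Download the checkpoint on first use and run a short chat generation."""
    # Imports live inside the function so the module loads even without
    # transformers installed; `pip install transformers accelerate` first.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Standard chat-template flow for instruction-tuned checkpoints.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

The 0.8B and 2B variants should run comfortably on CPU or a small GPU, which is the point of the release; the 9B model benefits from a GPU with sufficient memory.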
