On April 16, 2026, the Alibaba Qwen large model team officially open-sourced Qwen3.6-35B-A3B, a sparse mixture-of-experts (MoE) model, marking a key breakthrough for lightweight models in agentic programming.


The model has 35 billion total parameters, but thanks to the sparsity of the MoE architecture, only 3 billion are activated during inference. In terms of performance, Qwen3.6-35B-A3B surpasses the 27-billion-parameter dense model Qwen3.5-27B at a fraction of the computational cost, significantly outperforms its predecessor Qwen3.5-35B-A3B, and demonstrates logical reasoning and agent-collaboration capabilities comparable to larger models such as Gemma4-31B.


As a fully multimodal open-source model, Qwen3.6-35B-A3B also performs strongly in spatial intelligence and visual perception, achieving a RefCOCO score of 92.0, with some multimodal metrics reaching the level of Claude Sonnet 4.5. The model has been integrated into Qwen Studio and is available as an API service under the name qwen3.6-flash on the Alibaba Cloud BaiLian platform. It supports the preserve_thinking option for retaining the model's chain of thought, and integrates with mainstream AI programming assistants such as OpenClaw, Claude Code, and Qwen Code.
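To make the API availability concrete, here is a minimal sketch of how a request to the hosted model might be assembled. Only the model name `qwen3.6-flash` and the `preserve_thinking` option name come from the article; the payload shape (an OpenAI-style chat-completion body) and the placement of `preserve_thinking` as a provider-specific extra field are assumptions, not confirmed details of the BaiLian API.

```python
def build_chat_request(prompt: str, preserve_thinking: bool = True) -> dict:
    """Assemble a chat-completion payload for the qwen3.6-flash model.

    `preserve_thinking` (name taken from the article) asks the service to
    retain the model's chain of thought; treating it as a top-level extra
    field in the request body is an assumption for illustration.
    """
    return {
        "model": "qwen3.6-flash",
        "messages": [{"role": "user", "content": prompt}],
        # Provider-specific field; OpenAI-compatible SDKs typically pass
        # such fields through via an `extra_body` argument.
        "preserve_thinking": preserve_thinking,
    }

payload = build_chat_request("Write a binary search in Python.")
```

In practice this dictionary would be sent to the platform's chat-completion endpoint with an API key; the sketch stops at payload construction so the request shape is clear without depending on any unverified endpoint URL.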

With surging demand for edge-side AI and automated agents, the open-sourcing of Qwen3.6-35B-A3B not only gives developers a high-performance, low-power option but also signals that "small parameters, high intelligence" MoE models are becoming a new cornerstone for reshaping programming paradigms and multimodal interaction.