Zhipu AI has officially announced that its latest open-source lightweight model, GLM-4.7-Flash, surpassed 1 million downloads on Hugging Face, the leading international open-source community, just two weeks after its release.

As a 30B-A3B hybrid thinking model, GLM-4.7-Flash delivers strong performance. According to the latest benchmark results:

  • On mainstream benchmarks such as SWE-bench Verified and τ²-Bench, its overall performance surpasses gpt-oss-20b and Qwen3-30B-A3B-Thinking-2507.

  • Among open-source models of the same or similar size, it achieves state-of-the-art (SOTA) results.

Zhipu AI stated that GLM-4.7-Flash is designed to give developers worldwide a lightweight deployment option that balances reasoning efficiency, real-world application performance, and cost-effectiveness. Reaching one million downloads reflects strong recognition of the model within the open-source ecosystem and further advances the adoption and popularization of large-model technology.
