Ant Group Releases 4-bit Quantized Version of Open-Source Code Model CodeFuse-CodeLlama-34B


According to QuestMobile data, Ant Group's AI health application AQ stood out in the Top 10 list of China's AI-native applications for the third quarter of 2025, rising to 7th place and becoming the only health-related application on the list. Its user base has already surpassed that of general-purpose AI products such as Tongyi and Wen Xiaoyan. Within just over three months of release, the app grew rapidly, posting a compound growth rate of 83.4% in the third quarter.
Ant Group open-sources dInfer, the industry's first high-performance inference framework for diffusion language models, significantly improving inference speed. Benchmark results show it is 10.7 times faster than NVIDIA's Fast-dLLM, reaching 1,011 tokens per second in single-batch inference on the HumanEval code generation task, the first time diffusion language models have surpassed autoregressive models in inference speed, pushing the technology toward practical application.
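For readers who want to sanity-check those figures, the short Python sketch below derives the implied Fast-dLLM baseline throughput and the implied per-token latency from the two reported numbers. The only assumption is that the 10.7x figure is a plain throughput ratio measured on the same HumanEval single-batch setting.

# Back-of-the-envelope check of the reported dInfer benchmark numbers.
# Assumption: the 10.7x speedup is a straight throughput ratio versus
# NVIDIA Fast-dLLM under the same single-batch HumanEval setting.
dinfer_tps = 1011.0   # reported dInfer throughput, tokens per second
speedup = 10.7        # reported speedup over Fast-dLLM

baseline_tps = dinfer_tps / speedup   # implied Fast-dLLM throughput (~94.5 tokens/s)
latency_ms = 1000.0 / dinfer_tps      # implied dInfer per-token latency (~0.99 ms)

print(f"implied Fast-dLLM throughput: {baseline_tps:.1f} tokens/s")
print(f"implied dInfer per-token latency: {latency_ms:.2f} ms")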
Ant Group open-sources the trillion-parameter model Ling-1T, which uses FP8 for efficient training. Developed by Ant's 'Bailing' team, it is the largest base model released to date and belongs to the Ling 2.0 family, which comprises the Ling, Ring, and Ming series; the Ling series focuses on general-purpose tasks, emphasizing speed and efficiency.
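As context for the FP8 claim, the snippet below is a minimal, illustrative sketch of the common per-tensor scaling recipe used in FP8 (E4M3) training: scale a tensor into the representable range, cast it to FP8, then dequantize to inspect the rounding error. It assumes PyTorch 2.1 or newer (for the torch.float8_e4m3fn dtype) and is not Ant Group's actual Ling-1T training code.

import torch

# Illustrative per-tensor FP8 (E4M3) scaling; a generic sketch, not Ling-1T code.
E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def to_fp8_scaled(x: torch.Tensor):
    """Scale into the E4M3 range, cast to FP8, and return (fp8 tensor, scale)."""
    scale = x.abs().max().clamp(min=1e-12) / E4M3_MAX
    return (x / scale).to(torch.float8_e4m3fn), scale

def from_fp8_scaled(x_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Dequantize back to float32 to inspect the rounding error."""
    return x_fp8.to(torch.float32) * scale

w = torch.randn(1024, 1024)
w_fp8, s = to_fp8_scaled(w)
err = (from_fp8_scaled(w_fp8, s) - w).abs().mean()
print(f"mean absolute rounding error: {err.item():.6f}")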
Ant Group open-sources the trillion-parameter reasoning model Ring-1T-preview, the world's first open-source trillion-parameter reasoning model. The preview version shows outstanding natural-language reasoning performance, scoring 92.6 on AIME 25, surpassing all known open-source models as well as Gemini 2.5 Pro and approaching GPT-5's score of 94.6; it also performed well on Codeforces tests.