Privado Launches Open Source LLM Chat Application MuroChat to Enhance Enterprise Data Protection


Ant Group open-sources Ring-1T-preview, the world's first open-source trillion-parameter reasoning model. The preview version shows outstanding performance in natural language reasoning, scoring 92.6 on AIME25, surpassing all known open-source models as well as Gemini 2.5 Pro, and approaching GPT-5's score of 94.6; it also performed well on CodeForces tests.
Alibaba Cloud released and open-sourced Qwen3-Omni, billed as the world's first natively end-to-end omni-modal AI model. The model accepts multi-modal inputs such as text, images, audio, and video, and responds quickly with real-time streaming output. Through text pre-training followed by mixed multi-modal training, Qwen3-Omni gains strong cross-modal capabilities and demonstrates advanced performance across multiple fields.
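To make the "multi-modal inputs, streaming output" description concrete, here is a minimal sketch of how a request to an Omni-style, OpenAI-compatible chat endpoint might be assembled. The model identifier, the `input_audio` part type, and the endpoint conventions are assumptions for illustration, not Qwen3-Omni's confirmed API.

```python
# Hedged sketch: building a multi-modal chat-completions payload in the
# OpenAI message format, mixing text, image, and audio parts, with
# streaming requested. No network call is made here.

def build_omni_request(text, image_url=None, audio_b64=None, stream=True):
    """Assemble a chat payload whose user message mixes several modalities."""
    content = [{"type": "text", "text": text}]
    if image_url:
        content.append({"type": "image_url", "image_url": {"url": image_url}})
    if audio_b64:
        # "input_audio" follows the OpenAI audio-input convention; the exact
        # part type accepted by a given endpoint may differ.
        content.append({"type": "input_audio",
                        "input_audio": {"data": audio_b64, "format": "wav"}})
    return {
        "model": "qwen3-omni",  # assumed model identifier
        "messages": [{"role": "user", "content": content}],
        "stream": stream,       # ask the server for incremental streaming output
    }

req = build_omni_request("Describe this frame.",
                         image_url="https://example.com/frame.jpg")
print(len(req["messages"][0]["content"]))  # one text part + one image part
```

With `stream=True`, an OpenAI-compatible server would return the reply as incremental chunks rather than a single completion, which is what enables the fast, real-time responses described above.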
DeepSeek released the open-source model DeepSeek-V3.1-Terminus, which fixes language inconsistencies and abnormal characters while improving coding and search-agent performance. Benchmarks show superior performance in non-agent tasks.
In the field of artificial intelligence, the latest research results from the Tongyi DeepResearch team have attracted widespread attention. The breakthrough elevates AI from 'being able to chat' to 'being able to conduct research', and it does so in the open: Tongyi DeepResearch achieves state-of-the-art results on multiple authoritative benchmarks, with overall capabilities surpassing many internationally renowned models. Moreover, the model, framework, and solutions are fully open-sourced, genuinely bringing the productivity of deep research to the world.
Recently, ByteDance and the University of Hong Kong jointly launched a new open-source visual reasoning model, Mini-o3, marking another major breakthrough in multi-turn visual reasoning. Unlike previous visual language models (VLMs), which could only sustain one or two rounds of dialogue, Mini-o3 caps the number of dialogue rounds at six during training, yet at test time it can extend reasoning to dozens of rounds, greatly improving its ability to handle visual questions. Mini-o3's strength lies in deep reasoning on high-difficulty visual search tasks.
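The train-versus-test turn-budget idea can be sketched with a toy loop: the same iterative procedure is capped at a small number of turns during training but allowed to run much longer at inference. The stub "step" below (halving a search interval) merely stands in for one round of visual search; none of the names reflect Mini-o3's actual code.

```python
# Hedged sketch of a turn-budgeted reasoning loop. A step function is run
# repeatedly until it signals completion or the turn budget is exhausted,
# mirroring "train with few rounds, test with many".

def reasoning_loop(step_fn, state, max_turns):
    """Run step_fn up to max_turns times; return final state and turns used."""
    for turn in range(1, max_turns + 1):
        state, done = step_fn(state, turn)
        if done:
            return state, turn
    return state, max_turns

def step(state, turn):
    # Stand-in for one reasoning round: narrow a search interval by half
    # and declare success once it is small enough.
    lo, hi = state
    mid = (lo + hi) / 2
    return (lo, mid), (mid - lo) < 1e-6

# Training-style budget: 6 turns, too few for this task to converge.
_, train_turns = reasoning_loop(step, (0.0, 1.0), max_turns=6)
# Test-time budget: dozens of turns, enough to finish early.
_, test_turns = reasoning_loop(step, (0.0, 1.0), max_turns=48)
print(train_turns, test_turns)
```

The point of the design is that the loop itself is turn-count-agnostic: nothing learned at a six-turn budget prevents the same policy from being unrolled for dozens of turns when a harder query needs them.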