China's large-scale models have achieved another major breakthrough. China Telecom's Artificial Intelligence Research Institute (TeleAI) has officially open-sourced the Star Semantic Large Model TeleChat3 series, which includes TeleChat3-105B-A4.7B-Thinking, China's first large-parameter fine-grained MoE model trained entirely on fully indigenous computing power, as well as the dense-architecture model TeleChat3-36B-Thinking. The entire series was trained on the fully indigenous computing pool in Shanghai Lingang, with a base training corpus of 150 trillion tokens, marking a key step forward in China's autonomous control over ultra-large-scale AI models.
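The model name itself encodes the MoE configuration: 105B total parameters, of which roughly 4.7B are active for each token. A minimal sketch of what that ratio implies for per-token compute (the two figures come directly from the model name; everything else here is illustrative arithmetic, not an official specification):

```python
# Total vs. active parameters, as encoded in "TeleChat3-105B-A4.7B".
total_params_b = 105.0   # billions of parameters stored in the model
active_params_b = 4.7    # billions of parameters engaged per token

# Fraction of the network that participates in each forward pass.
# Fine-grained MoE routing means most experts sit idle for any given token.
active_ratio = active_params_b / total_params_b
print(f"Active per token: {active_ratio:.1%}")  # roughly 4.5%
```

This is the core economic argument for fine-grained MoE: the model retains the knowledge capacity of a 105B-parameter network while paying inference compute closer to that of a ~5B dense model.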


Full Localization: Full-Stack Compatibility from Chips to Frameworks

The TeleChat3 series is deeply compatible with the Huawei Ascend ecosystem:

- Supports the Huawei Ascend Atlas 800T A2 training server;

- Developed based on the MindSpore framework;

- The entire training and inference process runs on domestic AI computing infrastructure.

This move not only verifies that the domestic software and hardware stack can support hundred-billion-parameter large models, but also provides the industry with a secure, reliable alternative technical path, which has strategic significance for securing the AI infrastructure supply chain.


Innovative "Thinking Mode": Making the AI Reasoning Process Traceable

The TeleChat3 series introduces a "Thinking Mode" mechanism: by adding specific guiding tokens to the dialogue template, the model automatically generates intermediate reasoning steps, significantly improving logical rigor and accuracy on complex tasks. Across core dimensions including knowledge Q&A, mathematical reasoning, content creation, code generation, and intelligent agents (Agents), its performance is comparable to leading international models.

For example, when solving a math problem, the model no longer outputs only the answer; it shows the complete chain of thought ("understand the question → break down the steps → apply the formulas → verify the result"), greatly enhancing credibility and debuggability.
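The article does not specify the exact delimiter format TeleChat3 uses for its reasoning trace. As a sketch, many open reasoning models wrap the intermediate steps in `<think>...</think>` tags, and a caller can separate the trace from the final answer; the tag names, helper function, and sample response below are assumptions for illustration, not TeleChat3's documented API:

```python
def split_thinking(response: str, open_tag: str = "<think>", close_tag: str = "</think>"):
    """Separate a model's reasoning trace from its final answer.

    Assumes the (hypothetical) convention that the trace is wrapped
    in <think>...</think> at the start of the response text.
    """
    start = response.find(open_tag)
    end = response.find(close_tag)
    if start == -1 or end == -1:
        # No trace present: treat the whole response as the answer.
        return "", response.strip()
    thinking = response[start + len(open_tag):end].strip()
    answer = response[end + len(close_tag):].strip()
    return thinking, answer

# Illustrative response in the assumed format.
sample = "<think>2x + 3 = 11, so 2x = 8, x = 4. Check: 2*4 + 3 = 11.</think>x = 4"
trace, answer = split_thinking(sample)
print(answer)  # x = 4
```

Exposing the trace separately is what makes the reasoning auditable: an application can log or display the trace for debugging while showing only the final answer to end users.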

Open-Sourced and Open: Empowering the Industrial Ecosystem

Currently, the model weights, inference code, and usage examples for the TeleChat3 series have been published on the GitHub and ModelScope platforms, supporting both academic research and commercial applications. China Telecom stated that it will continue to promote deployment of the model in key sectors such as government services, telecommunications, energy, and finance, helping the "Artificial Intelligence +" initiative penetrate deep into core industries.

AIbase Observation: Domestic Large Models Enter a New Stage of "Full-Stack Self-R&D + Capability Benchmarking"

The release of TeleChat3 is not only a display of technological achievement, but also a substantive implementation of China's AI self-reliance strategy. When a hundred-billion-parameter MoE model can be trained efficiently on purely domestic computing power, and when its "Thinking Mode" approaches the international state of the art, domestic large models are moving from "usable" to "good to use" and even "trustworthy to use."

Against the backdrop of increasingly geopolitical global AI competition, China Telecom, with TeleAI as its lever, is building a secure, open, and high-performance domestic AI technology stack. Whether this path succeeds may determine China's voice in the coming intelligent era.

Project Address: https://github.com/Tele-AI/TeleChat3