Translation software is used every day, but the underlying models are often large and typically have to connect to cloud servers to run. Tencent's HY-MT1.5 series breaks this limitation, offering two versions for different scenarios:
1.8B Small Version: Don't be fooled by its size; it competes with much larger models. After optimization it needs only about 1 GB of memory and runs smoothly on edge devices such as phones, translating a sentence of roughly 50 Chinese characters in about 0.18 seconds on average.
7B Upgrade Version: An upgrade of the system that won the WMT25 championship, it is better suited to complex mixed-language translation, specialized terminology, and format-sensitive translation, and is aimed mainly at server deployment.

Why is it smarter than before?
To make the model better understand human language habits, Tencent's research team adopted a "five-step" training method:
Build a language foundation: First, let the model learn massive multilingual texts to master basic language rules.
Specialized training: Input a large amount of parallel data specifically, so it goes from "speaking" to "translating."
Refining: Use high-quality document data for fine-tuning, making the translation more natural.
Master and apprentice (distillation): The 7B model guides the 1.8B model during training, transferring much of the larger model's capability so that the small model stays capable despite its size.
Human evaluation: Finally, human preference feedback is introduced to refine accuracy, fluency, and the handling of cultural nuance.
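The distillation step above can be sketched as a standard soft-label objective: the student is trained to match the teacher's temperature-softened output distribution. This is a generic illustration, not Tencent's actual training code, and the logit values below are made up for the demo.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T gives a softer distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) over softened distributions, the classic
    # knowledge-distillation objective for a single output position.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    # Scaling by T^2 keeps loss magnitudes comparable across temperatures.
    return temperature ** 2 * kl

# Hypothetical next-token logits over a 3-word vocabulary:
teacher = [2.0, 0.5, -1.0]
student_far = [0.0, 0.0, 0.0]
print(distillation_loss(teacher, teacher))          # 0.0 (perfect match)
print(distillation_loss(student_far, teacher) > 0)  # True
```

In real training this loss is summed over every token position and usually mixed with the ordinary cross-entropy loss on the reference translation.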
Field Test Performance: Exceeding Some Mainstream Large Models
In multiple authoritative tests, the performance of this model was impressive:
In international tests such as WMT25, the 7B version scored higher than Gemini 3.0 Pro and many professional translation models.
Even in niche areas like "Mandarin to ethnic minority languages," its performance remained outstanding.
The 1.8B version received higher scores in real human testing than mainstream commercial translation systems like Baidu, Google, and Microsoft.
Not only does HY-MT1.5 translate accurately, but it also solves some practical usage pain points:
Accurate terminology: You can tell it how specific terms must be translated. For example, if "混元珠" must be rendered as "Chaos Pearl," it will not silently substitute another wording of its own.
Context awareness: The English word "pilot," in isolation, would be translated in its aviator sense, but if the surrounding text is about a TV series, the model recognizes it as "pilot episode."
Format preservation: If you are translating a segment with HTML tags or special formatting, it can perfectly preserve the original tag structure while translating the content.
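On the client side, the terminology and format-preservation features above can be exercised with a couple of small helpers. The prompt wording and helper names here are illustrative assumptions, not the model's documented API:

```python
import re

def build_glossary_prompt(text, glossary, source="Chinese", target="English"):
    """Compose a translation prompt that pins specific term renderings.

    `glossary` maps source-language terms to required translations.
    The exact instruction format HY-MT1.5 expects may differ; this only
    illustrates the general constrained-terminology pattern.
    """
    rules = "\n".join(f'- Always translate "{src}" as "{dst}".'
                      for src, dst in glossary.items())
    return (f"Translate the following {source} text into {target}.\n"
            f"Terminology rules:\n{rules}\n\n"
            f"Text:\n{text}")

def tags_preserved(source_html, translated_html):
    # Sanity-check format preservation: the translation must contain the
    # same HTML tags, in the same order, as the source segment.
    pattern = re.compile(r"</?[a-zA-Z][^>]*>")
    return pattern.findall(source_html) == pattern.findall(translated_html)

prompt = build_glossary_prompt("混元珠威力无穷。", {"混元珠": "Chaos Pearl"})
print(prompt)
print(tags_preserved("<b>你好</b><i>世界</i>", "<b>Hello</b><i>world</i>"))  # True
```

A check like `tags_preserved` is useful as a post-translation guard regardless of which model produced the output.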
Currently, Tencent has open-sourced the model weights:
GitHub: https://github.com/Tencent-Hunyuan/HY-MT
Hugging Face: https://huggingface.co/collections/tencent/hy-mt15
