Recently, the Yuanbao team released a report on usage of the Hunyuan large model on Yuanbao in 2025. The Yuanbao platform has achieved multi-dimensional upgrades in AI capabilities by building on the Hunyuan series of models.


On the Yuanbao platform, the Hunyuan model runs "fast thinking" and "deep thinking" in parallel. More than 70% of user requests choose the fast-thinking mode, and nearly half of those questions are satisfactorily resolved in the first round. The deep-thinking mode is suited to complex scenarios: such conversations usually last more than three rounds, and nearly 50% produce multi-step, structured output.


In image interaction, the Hunyuan T1-Vision model launched in May supports parsing up to 10 images at once, letting users upload images directly to query information, while the Hunyuan 2.1 image-to-image model enables "one-sentence image editing," simplifying the image-processing workflow. Since the release of Hunyuan Image 3.0 in September, users can generate images, including emoticons, from a text description alone. In November, HunyuanVideo 1.5 launched, allowing users to generate videos from text or image input, with convenient operation and fast turnaround.

In multimodal interaction, Yuanbao integrated the Hunyuan voice model, achieving low-latency voice calls and supporting scenarios such as storytelling and follow-up conversation. It also integrated a multimodal understanding model, adding a video-call feature in which the AI recognizes the content of the call screen in real time.