On March 9, 2026, Xiaohongshu's Super Intelligence team officially released FireRed-Image-Edit v1.1, an image editing model. The update arrived less than a month after v1.0, marking a notable acceleration in the iteration pace of Xiaohongshu's multimodal large-model work.

While retaining the strengths of its predecessor, the model has been deeply optimized for complex scenarios such as ID-consistent editing, multi-element fusion, portrait makeup, and font style reference, and demonstrates stronger semantic understanding and visual generation capabilities. According to the reported technical metrics, v1.1 now supports end-to-end optimization across training and deployment, with inference time reduced to 4.5 seconds and memory usage kept within 30 GB, significantly improving its engineering feasibility for industrial applications.


As the core engine of Xiaohongshu's push toward general intelligence, the Super Intelligence team has now fully open-sourced the project's code, technical report, model weights, and its training, distillation, and inference framework. This move not only provides foundational technical support for Xiaohongshu's internal content creation, publishing, search and recommendation, and commercial advertising business lines, but also fills a gap in the industry's fine-grained image editing tooling through an open-source ecosystem.

As global large-model competition enters the deep-application phase, the rapid evolution and open-sourcing of the FireRed series show that leading internet platforms are trying to lower the barriers to multimodal technology and build differentiated AI competitiveness centered on content creation. This shift from standalone model development to high-performance engineering deployment will further promote the scenario-driven adoption of multimodal intelligence in content e-commerce and social settings.