On April 15, the Xiaohongshu AI platform team quietly made a significant move in tech circles: it officially open-sourced Relax, a reinforcement learning training engine for large models.

Relax is designed specifically for multi-modal and agentic scenarios. In other words, it is not limited to text; the engine can handle images, audio, video, and other input forms within a single, flexible framework. This aligns well with the current direction of AI development, where multi-modality and agents are widely regarded as the industry's next major battleground.

Technically, Relax introduces two core mechanisms: modal-aware parallelism and end-to-end asynchronous pipelining. The former lets the system allocate computing resources according to the characteristics of each modality, while the latter reduces waiting and idle time during training through an asynchronous pipeline design. Together, these mechanisms aim to improve the efficiency and scalability of multi-modal training, which is of practical engineering value for AI teams running large-scale jobs.
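The announcement does not detail how Relax implements these mechanisms internally. As a rough illustration only, the core idea of end-to-end asynchronous pipelining is to overlap rollout generation with training updates through a bounded buffer, rather than alternating them in a synchronous generate-then-train loop. The sketch below uses Python threads and a queue; all names and stage timings are hypothetical, not taken from Relax.

```python
import queue
import threading
import time

# Hypothetical sketch of an asynchronous RL training pipeline:
# a rollout-generation stage (producer) and a training stage (consumer)
# run concurrently, decoupled by a bounded queue, instead of the
# synchronous generate -> train -> generate loop.

ROLLOUTS = 6
buf = queue.Queue(maxsize=2)  # bounded buffer: limits how far rollouts run ahead
trained = []

def generate_rollouts():
    for step in range(ROLLOUTS):
        time.sleep(0.01)           # stand-in for inference-time rollout generation
        buf.put(("batch", step))   # blocks only if the trainer lags far behind
    buf.put(None)                  # sentinel: no more rollouts

def train():
    while True:
        item = buf.get()
        if item is None:
            break
        time.sleep(0.01)           # stand-in for a gradient update
        trained.append(item[1])

producer = threading.Thread(target=generate_rollouts)
consumer = threading.Thread(target=train)
producer.start(); consumer.start()
producer.join(); consumer.join()

print(trained)  # every generated batch is consumed, in order
```

The point of the bounded queue is that neither stage fully idles while the other works; in a real engine the stages would be distributed across devices, and modal-aware parallelism would additionally shape how each batch is sharded.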

Notably, the act of open-sourcing is itself worth attention. Xiaohongshu is not traditionally an AI infrastructure company. By opening its internal training engine to the public, it demonstrates real depth in AI engineering and extends an olive branch to the developer community, trading technical contributions for ecosystem influence. This is a path an increasing number of tech companies are choosing in the AI era.

In the AI arms race, Xiaohongshu's move was quite unexpected.