SenseTime has debuted its "Muneng" embodied intelligence platform, bringing a new driving force to the smart terminal industry.
The "Muneng" embodied intelligence platform is powered by SenseTime's embodied world model, with SenseTime's large-scale infrastructure providing solid computing power support on both edge and cloud sides. With this powerful combination, the platform can endow robots and smart devices with excellent perception, visual navigation, and multimodal interaction capabilities, pushing smart terminals toward higher levels of autonomy and intelligence.
The platform's empowerment capabilities are broad: it can equip a wide range of terminal hardware, such as robots, with accurate perception and deep understanding of the physical world. Notably, it also supports integration into edge chips, demonstrating strong scenario adaptability and operating stably across complex environments.
The capabilities of SenseTime's embodied world model extend well beyond this. It can also generate multi-view videos with high consistency in both time and space. This allows machines to deeply understand, generate, and edit representations of the real world and interact with it spatially, making ideas like "playing 'Need for Speed' on real street scenes" possible and opening up broad possibilities for industries such as gaming and film.
Additionally, the model can build a 4D representation of the real world covering people, objects, and environments. Users need only enter simple prompts, such as "look for something on the shelf in the kitchen area" or "enter the entertainment room, turn right, then open the door to the yard," and the embodied world model autonomously generates poses, action skeletons, and instructions, greatly lowering the barrier to human-machine interaction and making it more convenient and intelligent.