On April 9 (Beijing time), Meta officially launched Muse Spark, a personal superintelligence model and the first product in its new Muse series. The model is natively multi-modal; supports deep reasoning, tool calling, visual chains of thought, and multi-agent collaboration; and is positioned as a "personal superintelligence." It is already available on the Meta.ai website and in the Meta AI app.

Contemplating Mode: Outstanding Multi-Agent Parallel Reasoning Performance

Muse Spark's Contemplating mode uses a multi-agent parallel reasoning architecture. It scored 58% on the Humanity's Last Exam benchmark and 38% on the FrontierScience Research benchmark, putting it in direct competition with Gemini 3.1 Deep Think and GPT-5.4 Pro on complex reasoning tasks.
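Meta has not published Contemplating mode's internals, so the following is only a minimal sketch of the general multi-agent parallel reasoning pattern: several independent agents attempt the same problem concurrently, and a simple majority vote selects the final answer. The `agent_solve` stub stands in for a real model call.

```python
# Generic sketch of multi-agent parallel reasoning with majority voting.
# This is NOT Meta's implementation; agent_solve is a toy stand-in.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def agent_solve(agent_id: int, question: str) -> str:
    """Stand-in for one reasoning agent; a real system would query a model."""
    # Toy behavior: most agents agree, one dissents.
    return "4" if agent_id != 2 else "5"

def contemplate(question: str, n_agents: int = 5) -> str:
    # Run the agents in parallel, then vote across their answers.
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        answers = list(pool.map(lambda i: agent_solve(i, question), range(n_agents)))
    return Counter(answers).most_common(1)[0][0]

print(contemplate("What is 2 + 2?"))  # majority answer: "4"
```

Production systems typically replace the vote with a learned verifier or answer-reconciliation step, but the parallel-sample-then-select skeleton is the same.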


Computational Efficiency Revolution: Only 1/10 the Power of Llama 4 Maverick for the Same Performance

Compared to Meta's own Llama 4 Maverick, Muse Spark needs less than one-tenth the compute to reach the same level of performance. This efficiency breakthrough opens up broad possibilities for individual users and lightweight deployment scenarios.

Natively Multi-Modal Architecture: Visual Capabilities Rebuilt from the Ground Up

Muse Spark adopts a natively multi-modal architecture, designed from the ground up to integrate visual information rather than having vision bolted on afterward. This design lets it perform well on visual STEM problems, entity recognition, and localization tasks. The most direct demonstration: a user can take a photo, and the model automatically generates a complete, playable Sudoku game from it, showcasing its visual understanding and generation capabilities.
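Meta has not published Muse Spark's request format, so as an illustration only, here is a hedged sketch of how a photo plus an instruction might be packaged for a natively multi-modal model, using an assumed OpenAI-style message schema; the `"muse-spark"` model name and every field below are assumptions, not a documented API.

```python
# Hypothetical request builder for a multi-modal model.
# The schema and model identifier are assumed for illustration only.
import base64

def build_multimodal_request(image_bytes: bytes, instruction: str) -> dict:
    """Package an image and a text instruction into one chat-style request."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "muse-spark",  # assumed identifier, not a documented name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": instruction},
                {"type": "image", "data": encoded},  # image travels inline, base64-encoded
            ],
        }],
    }

req = build_multimodal_request(
    b"\x89PNG...",  # raw photo bytes in practice
    "Turn this photo into a playable Sudoku grid.",
)
```

The point of the sketch is the shape of the interaction: a native multi-modal model takes image and text in one message, so no separate OCR or captioning step is needed before reasoning.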

Health Reasoning Technology: Trained with Over 1,000 Doctors

In the health sector, Muse Spark was trained in collaboration with over 1,000 doctors, enabling it to generate highly interactive health information displays. For example, when users upload photos or data about their diet, the model can analyze the nutritional content and use red and green dots to visually mark foods that are and are not recommended, helping users make informed dietary decisions quickly.
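The red/green display described above can be sketched in a few lines. This toy function, with an illustrative sugar threshold and made-up field names (neither comes from Meta), shows how per-food nutrition data could be turned into such markers:

```python
# Toy rendering of the red/green food markers described in the article.
# The sugar threshold and data fields are illustrative assumptions.
def annotate_foods(foods: list[dict], max_sugar_g: float = 20.0) -> list[str]:
    """Mark each food green (recommended) or red (not recommended)."""
    lines = []
    for food in foods:
        dot = "🟢" if food["sugar_g"] <= max_sugar_g else "🔴"
        lines.append(f"{dot} {food['name']} ({food['sugar_g']} g sugar)")
    return lines

meal = [
    {"name": "oatmeal", "sugar_g": 6.0},
    {"name": "soda", "sugar_g": 39.0},
]
for line in annotate_foods(meal):
    print(line)  # oatmeal gets a green dot, soda a red one
```

In the actual product the classification would come from the model's nutritional analysis of an uploaded photo rather than a hard-coded threshold; the sketch only shows the display step.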

Openness and Implementation: API Preview Launched Simultaneously