Recently, the Jan team officially released Jan-v2-VL-Max, a new multimodal large model. Rather than blindly pursuing generality, this 30B-parameter model precisely targets the core pain point of long-horizon execution tasks, aiming to solve the problem of AI breaking down partway through complex automation workflows.

The model is built on Qwen3-VL-30B-A3B-Thinking. To improve stability in multi-step operations, the Jan team applied LoRA-based RLVR. The strength of this approach is that it effectively reduces error accumulation during multi-step execution, markedly suppressing the "hallucination" phenomenon commonly seen when AI handles long tasks.
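To make the LoRA side of this concrete, here is a minimal numerical sketch of how a LoRA adapter modifies a frozen weight matrix. This is illustrative only and is not the Jan team's actual training code; the dimensions, rank, and scaling factor are arbitrary assumptions.

```python
import numpy as np

# LoRA sketch: instead of updating the full weight matrix W, train two
# small low-rank matrices A (r x d_in) and B (d_out x r), so the
# effective weight becomes W + (alpha / r) * B @ A. Only A and B are
# updated during fine-tuning; W stays frozen.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 8, 16  # assumed toy dimensions

W = rng.standard_normal((d_out, d_in))      # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero init

def lora_forward(x):
    # Base path plus scaled low-rank update path.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapter contributes nothing at first,
# so the adapted model starts exactly at the base model's behavior.
assert np.allclose(lora_forward(x), W @ x)
```

Because only A and B (roughly `r * (d_in + d_out)` parameters per layer) are trained, LoRA keeps the fine-tuning footprint small while still steering the base model's behavior.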


In terms of actual performance, Jan-v2-VL-Max has even surpassed well-known models such as Gemini 2.5 Pro and DeepSeek R1 on benchmark tests of execution stability, which measure how quickly hallucination erodes returns as tasks grow longer. This means stronger reliability on tasks demanding high logical consistency, such as agent automation and UI control.

Currently, developers and AI enthusiasts can try it directly through the web interface or deploy it privately with vLLM. As part of the Jan ecosystem, which focuses on offline operation and respects privacy, this new model undoubtedly gives users pursuing localized AI automation a more powerful option.
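A typical vLLM private deployment might look like the following. This is a sketch, not an official launch recipe: the parallelism and context-length flags are assumptions to adjust for your hardware, and the model ID comes from the Hugging Face link in this article.

```shell
# Serve the FP8 checkpoint with vLLM's OpenAI-compatible server.
# A 30B model needs substantial GPU memory even at FP8, so tune
# tensor parallelism and context length to your hardware.
vllm serve janhq/Jan-v2-VL-max-FP8 \
  --tensor-parallel-size 2 \
  --max-model-len 32768
```

Once the server is up, any OpenAI-compatible client can talk to it at `http://localhost:8000/v1`, which makes it easy to plug into existing agent or UI-automation tooling.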

Hugging Face: https://huggingface.co/janhq/Jan-v2-VL-max-FP8