The latest video generation model from Chinese AI startup MiniMax, Hailuo 2.3, is now officially available on the Replicate platform. The upgraded model has quickly drawn attention in AI content creation for its highly realistic physics simulation and smooth, lifelike motion.

The model supports both text and image inputs for generating high-quality video, marking another breakthrough in AI-driven dynamic visuals and making it especially suitable for film production, advertising, and digital entertainment. Hailuo 2.3 retains the Noise-aware Compute Redistribution (NCR) architecture of its predecessor, Hailuo 02, improves training efficiency by roughly 2.5x, and supports native 1080p clips up to 10 seconds long.

According to platform data, the model ranks among the top performers in global video generation benchmarks, surpassing Google's Veo 3, and scores well in independent image-to-video benchmarks. Through Replicate, a leading AI model hosting platform, developers and creators can access the model easily via API at affordable pricing, averaging about $1.5 per 6-second 1080p video.

The core highlight of this update is the refined simulation of human physics and movement. The model can generate complex yet smooth actions such as flips and dancing, capturing rhythm and natural fluidity, and can render dynamic details such as dancers under neon lights in the rain or a medieval market at dusk.
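For developers, access through Replicate works like any other hosted model: authenticate with an API token and call the model with a prompt. The sketch below uses Replicate's official Python client; the model slug and input field names (duration, resolution, first_frame_image) are assumptions for illustration, so the model page on Replicate should be checked for the exact identifier and input schema.

```python
# Minimal sketch: generating a clip with a Hailuo model via Replicate's Python client.
# Assumptions: the model slug and input field names below are illustrative, not confirmed schema.
# Requires `pip install replicate` and REPLICATE_API_TOKEN set in the environment.
import replicate

output = replicate.run(
    "minimax/hailuo-2.3",  # assumed slug; confirm the exact identifier on replicate.com
    input={
        "prompt": "A surfer riding a wave under low-angle sunlight, cinematic, 1080p",
        "duration": 6,          # seconds (assumed field name)
        "resolution": "1080p",  # assumed field name
        # "first_frame_image": open("start_frame.png", "rb"),  # optional image-to-video conditioning
    },
)

# replicate.run typically returns a URL or file-like handle to the generated video.
print(output)
```

The same call pattern covers image-to-video generation: supplying a starting frame alongside the text prompt conditions the generated motion on that image.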

In addition, Hailuo 2.3 introduces cinematic visual effects, including explosions, expansion effects, crowd movement, and fabric textures, and supports style transfer and surreal stylization, helping users iterate quickly on ideas ranging from product showcases to narrative shorts. Detail handling also improves markedly in clarity and consistency: text renders more sharply, faces and objects stay stable across frames, and micro-expressions are noticeably richer, capturing emotional responses in dialogue scenes.

The model also follows prompts more faithfully: a description such as "a surfer riding waves under low-angle sunlight" is converted accurately into a coherent video, avoiding the drift seen in previous generations.

Since the Hailuo series launched in early 2025, it has drawn positive feedback on Reddit and in professional forums, with users praising its practicality for advertising prototypes and VFX testing. Although clips are currently limited to 10 seconds, the model's physical accuracy and multilingual prompt support (optimized for English and other languages) make it well suited to mobile content creation, such as iOS and Android apps.

As AI video tools continue to evolve, Hailuo 2.3 may further lower the barriers to professional video production, enabling applications from educational simulations to marketing shorts. MiniMax says it plans to expand into multi-shot generation and audio integration and to explore longer sequence outputs. Creators can already try the model on platforms such as Replicate, fal, or Higgsfield AI, and it is expected to spark a new wave of AI media innovation.