On February 12, 2026, ByteDance's Seed team officially launched Seedance 2.0, its new-generation video creation model. The release adopts a unified multimodal architecture for joint audio-visual generation, marking AI video generation's shift from "single-point breakthroughs" toward the industrial-grade stage of "end-to-end collaboration".
Core Technology Leap: From "Being Able to Draw" to "Understanding Physics"
Compared with version 1.5, Seedance 2.0 is markedly more usable in scenes involving complex interaction and motion. Thanks to stronger physical realism, the model handles logically demanding actions such as pair figure skating and multi-person competition while keeping movement continuous and believable. The new version also supports 15-second high-quality multi-angle output and integrates stereo dual-channel audio, delivering an immersive experience with synchronized sound and picture.

Full-Modal Versatility: Director-Level Creative Control
Seedance 2.0 removes the boundaries between source materials. It accepts four input modalities (text, images, audio, and video) and lets users supply up to nine images plus multiple audio-visual references in a single request. Creators can precisely specify composition, camera movement, and even text-based storyboards, achieving "what you imagine is what you see" control.
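As a rough illustration of what such a multimodal request might look like, here is a minimal Python sketch against a hypothetical REST endpoint. The URL, field names, and parameter values below are assumptions made for illustration; none of them come from ByteDance's documentation.

```python
# Hypothetical sketch of a multimodal generation request.
# The endpoint URL, field names, and parameters are illustrative
# assumptions, NOT ByteDance's documented API.
import requests

API_URL = "https://example.com/v1/seedance/generate"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"

payload = {
    "prompt": (
        "A pair of figure skaters performing a throw jump; "
        "slow dolly-in, then cut to an overhead wide shot"
    ),
    # Up to nine reference images, per the announcement.
    "reference_images": [
        "https://example.com/refs/skater_a.png",
        "https://example.com/refs/skater_b.png",
        "https://example.com/refs/rink_lighting.png",
    ],
    # Optional audio reference for audio-visual joint generation.
    "reference_audio": "https://example.com/refs/arena_ambience.wav",
    "duration_seconds": 15,  # 15-second output, per the announcement
    "audio": "stereo",       # stereo dual-channel audio
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # e.g. a task ID or a URL to the rendered video
```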
Editing and Expansion: More Than Just Generation
To fit industrial creative workflows, Seedance 2.0 adds powerful video editing and extension capabilities. Users can make targeted modifications to specific segments or character actions, or continue a scene from a prompt. This "continue shooting" capability sharply lowers production barriers and costs in film, advertising, and e-commerce.
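A sketch of what a "continue shooting" (extension) call might look like follows, in the same hypothetical style as above; the endpoint, field names, and values are again illustrative assumptions rather than a documented interface.

```python
# Hypothetical sketch of a video extension ("continue shooting") request.
# Endpoint and field names are illustrative assumptions only.
import requests

API_URL = "https://example.com/v1/seedance/extend"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"

payload = {
    "source_video": "https://example.com/clips/product_demo.mp4",
    # Continue the scene from the final frame, guided by a new prompt.
    "prompt": (
        "The camera pulls back to reveal the full storefront "
        "as the lights dim"
    ),
    "extend_seconds": 5,
    # Alternatively, target a specific segment for an in-place edit:
    # "edit_range": {"start": 2.0, "end": 4.5},
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```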
Seedance 2.0 is currently available at: https://seed.bytedance.com/zh/seedance2_0
