Runway and Getty Collaborate to Launch Generative AI Video Model RGM for Hollywood and the Advertising Industry


Runway introduces GWM-1, a general-purpose world model that simulates physics and temporal evolution through pixel prediction, joining the 'world model' race alongside giants like Google and OpenAI to build core infrastructure for embodied and general AI.
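Runway has not published training details for GWM-1, but world models that learn physics through pixel prediction are generally trained on a next-frame prediction objective. Below is a minimal illustrative sketch of that objective in PyTorch; the function and the assumed model interface are hypothetical, not Runway's code:

```python
import torch
import torch.nn.functional as F

def next_frame_loss(model, video: torch.Tensor) -> torch.Tensor:
    """Generic next-frame pixel-prediction objective for a world model.

    video: (batch, frames, channels, height, width)
    model: assumed to map the first T-1 frames to predictions of frames 2..T.
    """
    context, target = video[:, :-1], video[:, 1:]
    prediction = model(context)  # (batch, frames-1, C, H, W)
    return F.mse_loss(prediction, target)
```

Minimizing a loss like this forces the model to internalize how scenes evolve over time, which is the sense in which pixel prediction can stand in for a physics simulator.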
Runway's Gen-4.5 video generation model improves visual accuracy and creative control, enabling users to create high-definition dynamic videos from brief text prompts, with support for complex scenes and vivid characters. The model is trained and runs inference on Nvidia GPUs, with optimizations for precision and style.
Runway's latest model, Gen-4.5, beat Google's Veo 3 and OpenAI's Sora 2 Pro on Video Arena, a third-party blind-testing platform, making it the first model from a small team to top the leaderboard. Runway's CEO credited a focus on research and rapid iteration, arguing that a team of roughly 100 people can challenge trillion-dollar companies not through budget but through talent density. The model uses an in-house hybrid spatiotemporal Transformer architecture, marking a breakthrough in AI video generation by a small team.
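Runway has not disclosed the architecture's internals, but hybrid spatiotemporal Transformers for video typically factorize attention into a spatial pass (within each frame) and a temporal pass (across frames at each spatial position), which is far cheaper than full attention over all space-time tokens. A minimal sketch of that factorization; all names and shapes here are illustrative assumptions, not Runway's design:

```python
import torch
import torch.nn as nn

class FactorizedSpaceTimeBlock(nn.Module):
    """Illustrative hybrid spatiotemporal Transformer block: spatial
    self-attention within each frame, then temporal self-attention
    across frames at each spatial location."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, tokens_per_frame, dim)
        b, t, s, d = x.shape

        # Spatial pass: fold frames into the batch dim, attend within each frame.
        xs = self.norm1(x).reshape(b * t, s, d)
        xs, _ = self.spatial_attn(xs, xs, xs)
        x = x + xs.reshape(b, t, s, d)

        # Temporal pass: fold spatial tokens into the batch dim, attend across frames.
        xt = self.norm2(x).transpose(1, 2).reshape(b * s, t, d)
        xt, _ = self.temporal_attn(xt, xt, xt)
        x = x + xt.reshape(b, s, t, d).transpose(1, 2)

        # Per-token MLP.
        return x + self.mlp(self.norm3(x))
```

For a clip with T frames of S tokens each, full attention scales with (T·S)² while the factorized form scales with T·S² plus S·T², which is why such designs are a common choice for video models.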
Runway has also launched a video-model fine-tuning tool that lets partners customize its AI models for verticals such as robotics and education, improving performance with less data and compute.
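Runway has not said how the tool works under the hood, but parameter-efficient methods such as low-rank adaptation (LoRA) are a standard way to specialize a large model with limited data and compute: the pretrained weights stay frozen and only small adapter matrices are trained. A minimal sketch of the idea, not Runway's actual mechanism:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Low-rank adapter wrapped around a frozen linear layer (illustrative;
    Runway has not disclosed its fine-tuning method)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only the small down/up projections receive gradients.
        return self.base(x) + self.scale * self.up(self.down(x))
```

Because only the rank-r adapter weights are updated, the trainable parameter count, and with it the data and compute needed, drops by orders of magnitude relative to full fine-tuning.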
The Wan team at Alibaba has officially open-sourced Wan2.2-Animate-14B (Wan-Animate), which has quickly become a focal point in the AI video field. This high-fidelity character animation framework addresses two major pain points, character animation generation and character replacement, with a single-model architecture: users upload a single image or video, and the model accurately transfers expressions and actions and integrates the character into the environment, greatly lowering the barrier to video creation. The model weights and inference code have been uploaded to the Hugging Face platform.
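Since the release lives on Hugging Face, the weights can be fetched with the standard `huggingface_hub` client. A minimal sketch; the repo id below is assumed from the model's name and should be verified against the actual listing:

```python
from huggingface_hub import snapshot_download

# Repo id assumed from the model's name; verify on huggingface.co before use.
local_dir = snapshot_download(repo_id="Wan-AI/Wan2.2-Animate-14B")
print(f"Weights and inference code downloaded to: {local_dir}")
```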