Fotor Launches One-Stop AI Long Video Platform Clipfly
The Tsinghua University TSAIL Lab and Shengshu Technology have jointly open-sourced TurboDiffusion, a video generation acceleration framework that speeds up inference for AI video diffusion models by a claimed 100 to 200 times with almost no loss in visual quality. The framework applies deep optimizations to existing open-source models, cutting generation time from minutes to seconds on a single RTX 5090 graphics card and bringing real-time AI video creation within reach.
Kuaishou's Kling 2.6 introduces two major features, voice control and motion control, adding native audio generation and more precise handling of complex motion. Voice control can generate sound effects, vocal tracks, and music matched to the video, and supports personalized voice customization.
Apple has introduced UniGen 1.5, a multimodal AI model that unifies image understanding, generation, and editing in a single framework, significantly improving efficiency. The model uses its understanding capabilities to refine its own generation results, a notable technical step forward.
SenseTime has launched Seko 2.0, billed as the world's first AI agent for multi-scene video generation, extending single clips into continuous narratives. Powered by SenseTime's proprietary multimodal model, it maintains high consistency in characters, scenes, and style, advancing plot coherence and visual uniformity, and scales to short videos, advertising, and education.
The new Medeo AI video generation tool marks a significant step forward, supporting complex prompts and real-time modification through natural language. Moving beyond the traditional one-shot generation model, it lets users iteratively edit content without limit, greatly expanding creative freedom.