Google has announced a new video generation option, the "Veo 3.1 - Lite [Low Priority]" mode, available exclusively to Ultra subscribers and aimed at increasing how often users create while improving cost-effectiveness. Its defining feature is that it consumes no additional subscription credits; it complements the existing "Veo 3.1 - Fast [Low Priority]" mode and further lowers the barrier to entry for high-quality AI video generation.
As the lowest-cost, fastest-response option in Google's video product line, Veo 3.1 Lite costs less than half as much to operate as the Fast version while matching its generation speed. According to official plans, Google will retire the "Veo 3.1 Fast - Low Priority" option on May 10 and replace it entirely with the Lite version, while the standard Veo 3.1 Fast version will keep its existing pricing and continue to operate. The move is widely read as a targeted adjustment to heavy users' creative habits, encouraging paid subscribers to experiment more without spending their core asset: credits.
On the market side, during the competitive gap left by OpenAI's suspension of the Sora project, Google has built a clear lead in Western AI video generation on the strength of its deep compute reserves. Although differences in video generation quality still await market validation, the launch of Veo 3.1 Lite marks Google's attempt to leverage its large-scale compute advantage, through differentiated pricing and service tiering, to raise the ecosystem barrier in the fierce global competition among video models.
This strategic move signals that the AI video industry is shifting from a pure contest of quality to a broader one of compute efficiency and user engagement. By offering a lightweight, credit-free generation option, Google stands to accumulate more user data and further optimize model inference costs, supporting its long-term leadership in multimodal large models.
