Google DeepMind launched Lyria 3 Pro on March 25, just six weeks after the previous version of Lyria 3 was released. The upgrade focuses on one key change: extending generation length from 30 seconds to 3 minutes while letting the model genuinely understand the internal structure of a song.
This is not a minor update. Thirty seconds is enough for background sound effects, but not for a full song: no verses, no twists, no climax. The new "structure-aware" capability in Lyria 3 Pro lets users specify elements like intro, verse, chorus, and bridge in their prompts, and the model arranges the transitions and dynamic changes accordingly. It marks a crucial step for AI music tools moving from "generators" to "creative tools."

Suno and Udio got there a year earlier
To be fair, this capability has been available in Suno and Udio since early 2025, with longer generation times and more flexible structural control. Google's entry at this point signals that it is serious about competing in AI music: backed by the distribution power of the Gemini ecosystem, Lyria 3 Pro's user base will be far larger than that of any independent AI music tool.
The simultaneous launch on Vertex AI is another signal: Google wants not just a consumer-facing tool, but to embed Lyria into enterprise workflows as well.
What can it do?
It accepts text, image, and video input, and the model automatically matches the music style to the content's emotion. Generated output includes vocals, lyrics, and instrumentation, covering multiple languages. Every output is automatically embedded with a SynthID watermark marking it as AI-generated, consistent with DeepMind's broader approach to content provenance.
Who can use it and how?
Gemini App paid users can use it now. Quotas are tiered by plan: AI Plus generates about 10 songs per day, Pro about 20, and Ultra about 50. Free users stay on the 30-second version of Lyria 3.
Supported languages include English, Japanese, Korean, Hindi, Spanish, Portuguese, German, and French, and the feature is limited to users aged 18 and above. To reach it: Gemini App → Create Music → choose "Thinking" or "Pro" mode.
Developers can access it through Google AI Studio and the Gemini API; Vertex AI access is in public preview, targeting enterprise on-demand generation scenarios. Google Vids and the music production tool ProducerAI have also begun integrating it. Enterprise Workspace support is expected within a few days.
Copyright issues remain unresolved
Google states that its use of training data follows agreements with artists, but has not disclosed specific sources or the scope of licensing. This sits against the same backdrop as the copyright lawsuits facing Suno and Udio: the industry's legal disputes over AI music training data have yet to reach a conclusion, and Google's statement reads more as a position declaration than a complete answer.
Lyria 3 Pro is rolling out to users gradually, and some regions may see delays.
