Google has officially opened Canvas, the Gemini-powered creation tool in its search engine's AI Mode, to all English-speaking users in the United States, moving the feature from limited Labs testing into a large-scale rollout. As a key step in Google's generative AI strategy, Canvas aims to transform the traditional search experience into a space for deep collaborative creation, helping users complete complex tasks ranging from project planning to code generation.

Google's large model Gemini

In terms of functionality, Canvas lets users open the canvas directly from the tools menu in AI Mode, pulling in information from the web and Google's Knowledge Graph via a sidebar. The tool not only assists with creative writing and document refinement but also handles cross-media conversion, such as turning a research report into a study guide, web page, or audio summary. For developers, Canvas provides an interactive environment for generating and testing code in real time, letting users iteratively refine application prototypes by conversing with Gemini. Subscribers to Google AI Pro and Ultra can currently access the Gemini 3 model and a 1-million-token context window in this mode to meet more demanding professional needs.

This update reflects Google's effort to build a competitive moat around its vast search entry point. Unlike OpenAI's Canvas feature, which triggers automatically, Google's version must be invoked deliberately, emphasizing user initiative. By integrating Canvas fully into the search ecosystem, Google aims to show that it can not only retrieve information but also participate deeply in users' productivity workflows. As AI applications shift from a question-and-answer model to a collaborative one, Google is leveraging the reach of its search business to push large-model tools from niche communities to the mass market, further intensifying competition with Anthropic and OpenAI in the intelligent-workspace arena.