Google has officially launched its "Search Live" feature in more than 200 countries and regions worldwide. The upgrade marks a comprehensive shift of mobile search from traditional text and image retrieval to real-time multimodal interaction. Users can now converse with an AI about their physical surroundings in real time, using the phone's camera and voice, within the Google app or Google Lens on Android and iOS devices.

The core capability behind Search Live comes from the new Gemini 3.1 Flash Live model. A natively multilingual audio and speech model, it significantly improves the naturalness and responsiveness of conversations. In practice, users simply point the camera at an object and ask a question, for example about a tricky furniture assembly step or the identity of a plant or animal, and the system responds with a spoken answer in real time along with related web links, bridging the physical world and the flow of digital information.
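The article describes a consumer feature rather than a developer API, but for a sense of how such a camera-plus-voice exchange can be wired up, here is a minimal sketch against the Live API in Google's google-genai Python SDK. The model id, image file, and question are illustrative assumptions, not anything the announcement specifies.

```python
# Sketch: send a camera frame plus a question over a live session and
# collect the streamed spoken answer, loosely mirroring Search Live.
import asyncio

from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # assumption: key-based auth

MODEL = "gemini-3.1-flash-live"  # hypothetical id, inferred from the article


async def ask_about_photo(image_path: str, question: str) -> None:
    # Request audio output so the answer comes back as speech.
    config = {"response_modalities": ["AUDIO"]}
    async with client.aio.live.connect(model=MODEL, config=config) as session:
        with open(image_path, "rb") as f:
            image_bytes = f.read()
        # One user turn: a camera frame plus the user's question.
        await session.send_client_content(
            turns=types.Content(
                role="user",
                parts=[
                    types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
                    types.Part.from_text(text=question),
                ],
            ),
            turn_complete=True,
        )
        audio = bytearray()
        async for message in session.receive():
            if message.data:  # streamed PCM chunks of the spoken answer
                audio.extend(message.data)
        print(f"Received {len(audio)} bytes of spoken answer")


asyncio.run(ask_about_photo("shelf_parts.jpg", "How do these brackets attach?"))
```

A production client would stream live camera frames and microphone audio continuously rather than sending a single still image, but the turn-based sketch above shows the basic request/response shape of a live multimodal session.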

The rollout is widely read as a key strategic move by Google to counter competitive pressure in AI search. Models such as Luma AI's Uni-1 are challenging Google's position in image processing, while OpenAI plans to build a super app by integrating ChatGPT with browser functionality. By deploying Search Live globally, Google leverages the lightweight, low-latency characteristics of the Gemini 3.1 Flash Live model to reinforce its defensive moat at the mobile entry point.
