On December 18, the AI search platform Perplexity announced that Gemini 3 Flash, the latest large model integrated into its platform, is now fully available to Pro and Max subscribers. The update marks a key step forward for Perplexity in improving response speed and reasoning efficiency.

Gemini 3 Flash is a lightweight, high-performance model recently launched by Google, characterized by low latency and high throughput. It retains strong language comprehension while significantly reducing inference cost and response time. Perplexity said the model will give users a faster, smoother Q&A experience, and is especially well suited to complex queries that require real-time information integration and multi-turn interaction.


According to the official announcement, Pro and Max users can access Gemini 3 Flash directly within the existing interface without taking any additional steps. The system automatically selects the most suitable model based on the query type, balancing speed and accuracy. The model has also been specifically optimized for multilingual support and code understanding, further extending Perplexity's reach into professional use cases.