At 2 a.m., OpenAI quietly rolled out its latest model, GPT-5.2, into production. There was no flashy launch event, only a 12-minute demo video: a 50-page quarterly report, taken from a blank template to finished images, data, and speaker notes, generated by AI in 180 seconds with zero errors. In the subsequent phone briefing, CEO Sam Altman made a quantitative commitment: “Professionals using GPT-5.2 will reclaim at least 10 hours per week.”

On the technical side, OpenAI has, for the first time, integrated “Mixture of Experts + Dynamic Caching” into a single set of weights: input first passes through a lightweight router that automatically determines the task type, then routes the request to the corresponding sub-network. As a result, latency in three high-frequency scenarios (writing code, building tables, and making PPTs) has dropped by 42%, 37%, and 51% respectively, and the logical-consistency test score has risen from 85.6 to 92.3. More important is the cost: official pricing is $1.75 per million input tokens and $14 per million output tokens, with a 10% discount on cached input, bringing the cost of a 3,000-word industry report down to as little as 0.8 cents, cheaper than the paper to print it on.
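Under the published prices ($1.75 per million input tokens, $14 per million output tokens, 10% off cached input), the per-request economics can be sketched with a quick back-of-envelope calculator; the token counts in the example call below are illustrative assumptions, not figures from the announcement.

```python
# Back-of-envelope cost estimate under the published GPT-5.2 prices.
# Token volumes in the example call are illustrative assumptions.

INPUT_PRICE = 1.75 / 1_000_000    # dollars per input token
OUTPUT_PRICE = 14.00 / 1_000_000  # dollars per output token
CACHE_DISCOUNT = 0.10             # 10% off cached input tokens

def estimate_cost(input_tokens: int, output_tokens: int,
                  cached_fraction: float = 0.0) -> float:
    """Return the dollar cost of one request."""
    cached = input_tokens * cached_fraction
    fresh = input_tokens - cached
    input_cost = fresh * INPUT_PRICE + cached * INPUT_PRICE * (1 - CACHE_DISCOUNT)
    return input_cost + output_tokens * OUTPUT_PRICE

# A ~3,000-word report is very roughly 4,000 output tokens (assumption);
# even with a sizable 10,000-token cached prompt, one request costs cents.
print(f"${estimate_cost(10_000, 4_000, cached_fraction=1.0):.4f}")
```

The function prices fresh and cached input separately, so the same sketch covers both cold and warm-cache requests.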

On the competitive front, OpenAI took direct aim at Google's Gemini 3. Internal benchmarks show that in the “agent AI” category, GPT-5.2's success rate is 18 percentage points higher: given a single instruction, the model can chain calls to Excel, Slack, Notion, and GitHub without human intervention. Altman put it bluntly: “Gemini is still chasing scores, while we are running businesses.”
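The agent behavior described here, one instruction fanning out into a chain of tool calls, can be pictured as a simple dispatch loop. Everything below (the tool names, the plan format, the `run_plan` helper) is a hypothetical sketch for illustration, not OpenAI's actual agent interface.

```python
# Hypothetical sketch of a chained tool-calling loop; tool names and the
# plan format are illustrative, not OpenAI's actual agent API.
from typing import Callable

# Each "tool" is a stand-in for a real integration (Excel, Slack, ...).
TOOLS: dict[str, Callable[[str], str]] = {
    "excel":  lambda arg: f"excel wrote {arg}",
    "slack":  lambda arg: f"slack posted {arg}",
    "notion": lambda arg: f"notion saved {arg}",
    "github": lambda arg: f"github opened {arg}",
}

def run_plan(plan: list[tuple[str, str]]) -> list[str]:
    """Execute a model-produced plan step by step, with no human in the loop."""
    results = []
    for tool, arg in plan:
        results.append(TOOLS[tool](arg))
    return results

# One instruction expands into a chain of calls:
plan = [("excel", "Q3 revenue table"),
        ("slack", "summary to #finance"),
        ("github", "issue for data pipeline fix")]
for line in run_plan(plan):
    print(line)
```

The benchmark claim is essentially about how reliably the model produces and completes such a plan end to end; the loop itself is trivial once the plan is correct.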

The adult mode was also officially announced and is expected to open next year. This version relaxes content policies and supports private deployment in high-risk fields such as healthcare, law, and finance: enterprises can install the model in their own server rooms, keeping data inside their walls. OpenAI emphasized that adult mode will ship with traceable audit logs, satisfying dual compliance with the EU AI Act and China's deep-synthesis regulations.

Once the price list was published, the market voted immediately: within three hours of the preview channel opening, OpenAI API usage surged to twice the previous month's peak, and the average queue wait on Microsoft Azure's GPT service stretched from 20 seconds to 3 minutes. For small and medium-sized enterprises, this is genuinely dirt-cheap pricing: with the caching discount, a 50-person startup's monthly AI bill for coding, copywriting, and customer service can fall to $300, roughly one day of an intern's salary.
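The $300 figure can be sanity-checked against the published prices. The per-seat token volumes and cache hit rate below are assumptions chosen for illustration, not reported usage numbers.

```python
# Rough monthly budget check for a 50-person team under the published prices.
# Per-seat token volumes and the cache hit rate are illustrative assumptions.

INPUT_PRICE = 1.75 / 1_000_000
OUTPUT_PRICE = 14.00 / 1_000_000
CACHE_DISCOUNT = 0.10

seats = 50
input_tokens_per_seat = 1_500_000   # assumed: prompts, mostly repeated
output_tokens_per_seat = 200_000    # assumed: generated code, copy, replies
cached_fraction = 0.9               # assumed cache hit rate

cached = input_tokens_per_seat * cached_fraction
fresh = input_tokens_per_seat - cached
per_seat = (fresh * INPUT_PRICE
            + cached * INPUT_PRICE * (1 - CACHE_DISCOUNT)
            + output_tokens_per_seat * OUTPUT_PRICE)
print(f"per seat: ${per_seat:.2f}, team: ${per_seat * seats:.2f}")
```

With these assumed volumes the team total comes in under the quoted $300 a month, so the claim is plausible arithmetic rather than marketing alone.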