GPT-5.5 launched just three weeks ago, and news of GPT-5.6 is already spreading.

Renowned leaker Leo revealed that the development of GPT-5.6 is now fully underway, with the first internal checkpoints starting testing in the past few days. It's likely to be officially unveiled next month. More interestingly, someone found traces of rollout mapping in OpenAI's internal Codex logs — most calls still point to GPT-5.5, but one record clearly points to GPT-5.6. In other words, Codex might already be secretly using it for testing. Internal code names have also been accidentally exposed: ember-alpha and beacon-alpha.

Commenters exclaimed: there's no keeping up with this pace of iteration.


Speed once again pushed to the limit — Codex is about to launch ultrafast mode

GPT-5.6 hasn't even arrived yet, but OpenAI's product side can't hold back anymore.

Leaker Chetaslua also revealed that this Thursday, OpenAI will launch an "ultrafast mode" in Codex, boosting response speed by 2 to 3 times and targeting latency-sensitive tasks. At the same time, gpt-image-2, which leads the Image Arena rankings by +242 points, is getting a simultaneous update under A/B testing.

This isn't the first time. When GPT-5.4 was released in March this year, the /fast mode already delivered a 1.5x speedup; GPT-5.3-Codex-Spark, powered by Cerebras chips, pushed inference past 1000 tokens per second, 15 times faster than normal mode. This time, ultrafast mode accelerates the flagship model itself — not a cut-down variant, not a small model, but full-size acceleration.
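As a back-of-the-envelope check, the figures quoted above are internally consistent. This sketch uses only the numbers in the article (1000 tokens/s at a claimed 15x, and the 2-3x ultrafast range); the implied baseline is an inference, not a reported figure.

```python
# Figures quoted in the article (assumptions, not measurements):
spark_tps = 1000   # GPT-5.3-Codex-Spark: ~1000 tokens/s
speedup = 15       # described as 15x normal mode

# Implied normal-mode throughput:
baseline_tps = spark_tps / speedup  # ~67 tokens/s

# What a 2-3x "ultrafast mode" would mean on that baseline:
ultrafast_low = baseline_tps * 2
ultrafast_high = baseline_tps * 3

print(f"implied baseline: {baseline_tps:.0f} tok/s")
print(f"ultrafast range: {ultrafast_low:.0f}-{ultrafast_high:.0f} tok/s")
```

So even a 2-3x speedup on the full-size model would still sit well below Spark's specialized throughput — which is why the article stresses that the point here is accelerating the flagship, not peak tokens per second.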


For developers, everything that involves waiting — Agent loops, long task pipelines, browser automation — will feel completely different.

Incidentally, last week the official ChatGPT account posted a sci-fi-styled image in which the classic "Ask ChatGPT" on the interface had quietly been replaced with "Message ChatGPT". Otto commented "Call me maybe", which set off speculation that ChatGPT will integrate voice calls or launch a hardware product.

Codex vs Claude Code, the subsidy war is officially underway

The most dramatic scene in Silicon Valley unfolded today.

Just before the launch of OpenAI's ultrafast mode, Anthropic struck first — announcing that starting June 15, paid users' monthly programming quotas will increase by 50%, covering the Claude Agent SDK, command-line tools, and the Claude Code integration with GitHub workflows. It also released an Opus 4.7 Fast mode, pitching faster advanced reasoning and smoother long-context coding than Codex.

OpenAI didn't make anyone wait long. In response, it rolled out an aggressive subsidy: any company migrating to Codex from another platform within the next 30 days gets two months of free usage. At $200 per month for the Pro plan, that's $400 given away per seat. Otto personally joined the campaign, calling Codex "the strongest AI programming product available on the market".

Only a few hours later, the report came out: 2000 developers contacted OpenAI within 3 hours.

The head-to-head confrontation stunned Silicon Valley. Developers, though, were delighted — where else can you get subsidized by two top labs at once?

The flywheel has already started spinning

Looking at these two events together, a trend more worthy of attention than any single news item is taking shape.

AI is accelerating its own evolution. GPT-5.3-Codex was OpenAI's first model to participate in its own training. By the time GPT-5.5 shipped, 85% of OpenAI employees were using Codex to write code every week. It's almost certain that GPT-5.6 was developed with the deep involvement of GPT-5.5 — AI is helping OpenAI build stronger AI.

At the same time, the spread of AI programming tools is unlocking unprecedented engineering productivity. Codex already has 3 million weekly active users, and Claude Code's user base is growing fast as well. Millions of developers now treat AI programming tools as daily standard equipment, and AI-generated code feeds back into AI training and deployment. The cycle will only spin faster.

Otto said that OpenAI's goal is no longer just AGI, but directly aiming for ASI.

When model iteration speed is driven by AI itself, when AI programming tools become the infrastructure for AI development, and when two trillion-dollar companies use a "subsidy war" to accelerate this process —

The flywheel toward ASI has already begun to spin on its own.