OpenAI has officially released GPT-5-Codex-Mini, a cost-effective coding model for developers. Following the September release of GPT-5-Codex, the new model further expands the reach of AI-assisted programming.

GPT-5-Codex is built on the GPT-5 architecture and focuses on code reasoning and generation. It can carry out complex operations in real software engineering tasks, including creating new projects, extending features, writing tests, and large-scale refactoring. On the SWE-bench Verified benchmark it scores 74.5%, surpassing GPT-5 at high reasoning effort (72.8%), a significant performance advantage.


The newly released GPT-5-Codex-Mini is a lightweight variant that balances performance and cost: developers get roughly four times as many API calls as with the original model, at the price of a slight drop in capability. On the same benchmark, GPT-5-Codex-Mini scores 71.3%, maintaining high accuracy while substantially lowering the barrier to use.

OpenAI recommends that developers prioritize GPT-5-Codex-Mini for tasks of low to medium complexity, or when they are approaching their API call quota. Notably, once usage reaches 90% of the quota, the system automatically suggests switching to the Mini version so that project progress is not interrupted.
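The threshold behavior described above can be sketched as a simple client-side fallback. The model names and the 90% figure come from the article; the `pick_model` function itself is purely illustrative and is not part of any OpenAI SDK.

```python
# Hypothetical sketch of quota-aware model selection, assuming a client
# tracks its own API-call usage. Once usage crosses 90% of the quota,
# prefer the cheaper Mini model, as the article describes.

def pick_model(calls_used: int, quota: int, threshold: float = 0.9) -> str:
    """Return which model to use given current API-call usage."""
    if quota <= 0:
        raise ValueError("quota must be positive")
    if calls_used / quota >= threshold:
        # Mini variant: slightly lower accuracy, ~4x the call budget
        return "gpt-5-codex-mini"
    # Full model for more complex tasks while quota remains
    return "gpt-5-codex"

print(pick_model(450, 1000))  # well under quota -> gpt-5-codex
print(pick_model(950, 1000))  # past the 90% threshold -> gpt-5-codex-mini
```

The same idea could just as easily live server-side; the article only says the system "suggests" the switch, so an actual integration might prompt the developer rather than swap models silently.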

Currently, GPT-5-Codex-Mini is available through the CLI and IDE extensions, with API access coming soon. Thanks to improved GPU efficiency, OpenAI has also raised the API call limit by 50% for ChatGPT Plus, Business, and Edu users, while Pro and Enterprise users receive priority speed and resource allocation.

In addition, the OpenAI team has optimized Codex's underlying architecture, eliminating the performance fluctuations previously caused by server traffic and routing load, so developers get a stable, consistent experience even during peak hours.