Recently, a chain-of-thought (CoT) transcript attributed to a "GPT-5 Thinking" model, allegedly from internal testing, circulated online, revealing the model's hidden chain of reasoning in complex tasks.

OpenAI confirmed the authenticity of the leak through official channels today, emphasizing that it is not a security vulnerability but an innovative feature of the model's design. The incident not only exposed GPT-5's distinctive "thinking language" for solving logic puzzles such as Sudoku, but also sparked broad industry discussion about AI's autonomous reasoning capabilities.


According to the leaked documents, "GPT-5 Thinking" is a variant model optimized for advanced reasoning that relies on a hidden chain-of-thought mechanism. When processing a question, the model performs multi-step logical deductions internally without outputting them to the user, improving both accuracy and efficiency. In one Sudoku test case, the model's thought chain used a highly abstract "language", along the lines of "evaluate grid constraints → simulate filling paths → verify conflict thresholds", which differs markedly from traditional natural-language output and reflects OpenAI's breakthroughs in model architecture.
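To make the idea concrete, the hidden-reasoning pattern described above can be sketched in a few lines of Python. This is purely an illustrative analogy, not OpenAI's actual implementation: the function names (`candidates`, `solve_cell`) and the `trace` list are hypothetical, standing in for the "evaluate constraints → simulate paths → verify conflicts" steps, with the trace kept internal and only the final answer returned.

```python
def candidates(grid, row, col):
    """Evaluate grid constraints: values not already used in the
    row, column, or 3x3 box containing (row, col)."""
    used = set(grid[row]) | {grid[r][col] for r in range(9)}
    br, bc = 3 * (row // 3), 3 * (col // 3)
    used |= {grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)}
    return [v for v in range(1, 10) if v not in used]

def solve_cell(grid, row, col):
    """Simulate filling paths and verify conflicts, recording the
    reasoning in a hidden trace; only the final value is returned."""
    trace = []  # hidden chain of thought, never shown to the caller
    opts = candidates(grid, row, col)
    trace.append(f"evaluate constraints at ({row},{col}) -> {opts}")
    if len(opts) == 1:
        trace.append("single candidate, no conflict possible")
        return opts[0]  # the only output the "user" ever sees
    trace.append("ambiguous cell, deferring decision")
    return None
```

For example, on a grid whose first row is `[0, 2, 3, 4, 5, 6, 7, 8, 9]` (and all other cells empty), `solve_cell(grid, 0, 0)` deduces the single remaining candidate `1` internally and returns just that value, while the intermediate trace stays hidden.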


OpenAI stated in its official statement today: "We confirm that the leaked document originated from internal development documentation. The hidden CoT design of GPT-5 Thinking is a core innovation used to handle complex coding and multimodal tasks, not a security risk. We will continue to optimize model transparency to balance user experience and technical security." A company spokesperson also revealed that the model has been tested in enterprise applications, supporting complex programming tasks from "minimal prompts", and has been benchmarked against competitors such as Llama 4 and Cohere v2.