In the history of artificial intelligence development, a landmark moment has quietly arrived. Renowned AI researcher Ilya Sutskever recently gave an interview in which he systematically laid out the vision behind SSI (Safe Superintelligence Inc.), the lab he founded after leaving OpenAI. The conversation goes straight at the AI industry's current pain point: models score well on tests but struggle with real-world tasks. Ilya argues bluntly that the "era of scale" has come to an end, and that over the next decade AI will return to a more fundamental path of learning like humans, integrating human emotional mechanisms to achieve the leap to safe superintelligence.

As a pioneer in the AI field, Ilya's views have sparked widespread discussion. He emphasizes that AI should no longer blindly pursue parameter stacking but should shift toward a value-driven learning paradigm. This shift is not only a technological turn; it will also reshape the symbiotic relationship between human society and machine intelligence.

The Current Crisis of AI: High Scores, Low Capabilities

Today's mainstream large models routinely post impressive results on standardized tests, yet their limitations surface quickly in real-world use. Ilya points out that these models handle predefined tasks with ease but fall into "circular errors" in complex scenarios: fixing one bug introduces another. This is not a surface-level engineering flaw but a fundamental defect of the training paradigm.

During the reinforcement learning phase, developers focus too heavily on "evaluation optimization," turning models into "test-taking students" who care only about scores while neglecting the generalization the real world demands. The result is limited economic impact and bottlenecks in practical deployment: AI can "score high" but "can't get things done." Ilya warns that this path has become a dead end and needs to be rebuilt from the root.
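To make this failure mode concrete, here is a toy numerical sketch (my own illustration, not anything from the interview): when selection pressure is applied to a proxy score that over-weights benchmark-specific tricks, the winner on the eval is rarely the most capable model. The candidate pool, the variable names, and the 2.0 weighting are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical pool of candidate checkpoints. "true_skill" is what real
# users would experience; "benchmark_tricks" is capability only the eval
# can see. The 2.0 weighting is an arbitrary illustration.
n_candidates = 200
true_skill = rng.normal(0.0, 1.0, n_candidates)
benchmark_tricks = rng.normal(0.0, 1.0, n_candidates)
benchmark_score = true_skill + 2.0 * benchmark_tricks

best_on_benchmark = int(np.argmax(benchmark_score))
best_for_real_use = int(np.argmax(true_skill))

print(f"picked by eval score:  true skill = {true_skill[best_on_benchmark]:+.2f}")
print(f"actually most capable: true skill = {true_skill[best_for_real_use]:+.2f}")
```

Run with different seeds, the eval winner's true skill typically lags far behind the genuinely best candidate: Goodhart's law in miniature.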

Pre-training vs. Reinforcement Learning: The True Cradle of Intelligence

Ilya breaks AI training into two pillars: pre-training and reinforcement learning. The former is like a "bias-free data bath" in which models absorb the full picture of the human world from massive raw data, with little human intervention; the latter relies on manually designed "sandbox environments," where the goal too often collapses into "making the scores look better."

He states openly that this imbalance costs AI its "insight and transferability." Pre-training lays a foundation of broad knowledge, while reinforcement learning, as practiced today, acts as a shackle. Going forward, the key is to rebalance the two, letting AI shift from passive response to active understanding.
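As a rough sketch of the two pillars (toy formulations I am assuming, not the actual training stack): pre-training draws dense per-token signal from raw data, while the RL phase receives a single scalar from a hand-designed grader and can only learn what that sandbox measures.

```python
import numpy as np

def pretraining_loss(logits: np.ndarray, target_token: int) -> float:
    """Next-token cross-entropy: every token of raw text carries signal,
    so the model absorbs the data distribution without a curator."""
    z = logits - logits.max()                # numerical stability
    log_probs = z - np.log(np.exp(z).sum())  # log-softmax
    return float(-log_probs[target_token])

def rl_objective(answers: list[str], grader) -> float:
    """Mean scalar reward from a hand-built grader: whatever the grader
    does not measure is invisible to the learning signal."""
    return float(np.mean([grader(a) for a in answers]))

# Hypothetical grader: an exact-match benchmark check.
grader = lambda a: 1.0 if a.strip() == "42" else 0.0

print(pretraining_loss(np.array([2.0, 0.5, -1.0]), target_token=0))  # ~0.24
print(rl_objective(["42", "forty-two"], grader))                     # 0.5
```

The contrast in signal density is the point: the second objective rewards a correct string and nothing else, which is exactly the "making scores look better" trap.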

The Secret Weapon of Human Intelligence: Emotion-Driven Value Functions

Why can humans navigate a complex world so effortlessly? Ilya's answer is an "inner value system"—that is, the emotional mechanism. It acts like an invisible compass, guiding the direction of learning: happiness reinforces positive feedback, anxiety warns of potential risks, shame calibrates social norms, and curiosity drives endless exploration. In the context of AI, this is equivalent to a dynamic "value function," enabling the system to anticipate "directional deviations" rather than passively waiting for punishment.
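A minimal sketch of that idea, assuming a tabular TD(0) learner (the class name, the curiosity and anxiety terms, and their weights are my hypothetical stand-ins, not anything Ilya specified): internal appraisal signals reshape the learning target before any external reward arrives, which is what lets the agent steer away from a bad direction instead of waiting to be punished.

```python
import numpy as np

class EmotiveValueFunction:
    """Tabular TD(0) learner whose reward mixes external outcomes with
    internal appraisal signals (the article's 'emotional compass')."""

    def __init__(self, n_states: int, lr: float = 0.1, gamma: float = 0.95):
        self.v = np.zeros(n_states)       # learned state values
        self.visits = np.zeros(n_states)  # visit counts, for novelty
        self.lr, self.gamma = lr, gamma

    def intrinsic_signal(self, state: int, risk: float) -> float:
        # Hypothetical stand-ins for the emotions named in the article:
        curiosity = 1.0 / np.sqrt(self.visits[state])  # novelty bonus
        anxiety = -risk                                # anticipated danger
        return float(curiosity + anxiety)

    def update(self, s: int, s_next: int, ext_reward: float, risk: float):
        self.visits[s_next] += 1  # arriving at s_next makes it less novel
        # Internal appraisal reshapes the target before any external
        # punishment arrives, so the agent can steer away early.
        r = ext_reward + self.intrinsic_signal(s_next, risk)
        td_error = r + self.gamma * self.v[s_next] - self.v[s]
        self.v[s] += self.lr * td_error

vf = EmotiveValueFunction(n_states=5)
vf.update(s=0, s_next=1, ext_reward=0.0, risk=0.8)  # risky move, no reward yet
vf.update(s=0, s_next=2, ext_reward=0.0, risk=0.0)  # safe and novel
print(vf.v[:3])  # state 0's value already reflects the internal appraisal
```

Note that both updates carry zero external reward, yet the value of state 0 already shifts toward the safe, novel option: the "compass" acts ahead of the environment's verdict.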

Ilya offers a profound insight: "True intelligence is not just prediction, but a sustainable value system." If AI can internalize the capacity for self-assessment, it will awaken "meaning-driven learning" and come genuinely closer to human wisdom.

The End of an Era: From "Scale Rush" to "Structural Innovation"

Looking back on the past decade-plus of AI, Ilya divides it into two eras: the "research era" from 2012 to 2020, marked by breakthroughs such as AlexNet and the Transformer that lit the torch of innovation, and the "scale era" from 2020 to 2025, obsessed with the mindless accumulation of data, compute, and parameters. That model has now peaked: marginal returns are shrinking, and the oxygen of innovation has been drained.

Ilya declares the era of scale over. Even if compute keeps expanding, simply piling on more of the same will no longer produce miracles. The next stage will center on new principles of "learning like humans," shifting from quantitative expansion to structural revolution. Whoever masters emotion-driven generalization will lead the way.

A Ten-Year Vision: The Gradual Dawn of Safe Superintelligence

Looking ahead, Ilya outlines the evolution path of AI: within 5 to 20 years, systems will acquire human-like learning—actively exploring the world, understanding physical and social laws, self-reflecting on biases, and performing cross-modal reasoning with multi-sensory integration.

This leap will set off a broad transformation: economic productivity will surge, education and research paradigms will be rewritten, and the human-machine relationship will enter a new era of "collaborative intelligence." But opportunity comes with risk, and Ilya repeatedly stresses "safety first": SSI will follow principles of gradual deployment and transparent disclosure, putting each stage's capabilities, risks, and control mechanisms under external review so that the public and governments stay informed in step.

Ilya's interview serves as a wake-up call, reminding AI professionals: intelligence is not just a pile of cold algorithms, but a warm pursuit of values. SSI's exploration may become a lighthouse toward safe superintelligence, worthy of global attention.