As the competition for artificial general intelligence (AGI) heats up, Microsoft AI CEO Mustafa Suleyman has sounded an alarm to the entire industry: we are dangerously conflating two critical concepts—“control” and “alignment.” On the social platform X, he stated plainly that an AI system that cannot be effectively controlled is inherently unreliable and dangerous, no matter how “aligned” its goals are with human values.
Suleyman emphasized that the industry has focused too heavily on making AI "understand" human intent (alignment) while neglecting a more fundamental prerequisite: ensuring that a system's behavior always stays within hard boundaries set by humans (control). As he put it: "You can't control an out-of-control system, even if it claims to 'love you.'" In his view, before pursuing superintelligence, the priority must be building verifiable and enforceable control frameworks, which he regards as the non-negotiable bottom line for AI safety.
This perspective is further elaborated in his recent article titled "Humanist Superintelligence" published on the Microsoft AI blog. Suleyman argues that the focus of AI development should shift from the fantasy of "fully autonomous general intelligence" to deploying controlled, focused, and auditable intelligent systems in specific high-value areas such as medical diagnosis, drug discovery, and clean energy breakthroughs. These "humanist superintelligences" do not aim for omniscience but are mission-driven to address humanity’s most urgent challenges, always under human supervision.
Notably, Suleyman has shown an unusual willingness to collaborate across the industry. He revealed that he is in close communication with executives at several leading AI companies, publicly praised Elon Musk for his "frank and open discussion on safety," and commended Sam Altman for being "efficient and highly driven." However, he reiterated that regardless of technical differences, "control must be the starting point and red line for all AI development."
In 2026, with AI capabilities growing exponentially, Suleyman's warning lands like a bucket of cold water, reminding a fast-moving industry that true intelligence is defined not only by what a system can do, but also by what it must never do. Only by first building a solid dam of control can the boat of alignment avoid capsizing in a sea of unknown risks.
