According to TechCrunch, the dominant position of foundation models, once considered the "crown jewel" of the AI field, is facing unprecedented challenges. AI startups have quietly shifted their focus toward task-specific customized models and user interfaces, treating foundation models as interchangeable commodities. This trend was particularly evident at the recent Boxworks conference, and the industry as a whole appears to be moving away from the pursuit of artificial general intelligence (AGI) and into a new era of decentralization and specialization.

Paradigm Shift from "General" to "Vertical"
It was long assumed that whoever controlled the foundation models would control the future of AI. That view is now being challenged. The article points out that the returns from scaling pre-training are diminishing: simply pouring more money and compute into training ever-larger models no longer yields the dramatic performance gains seen in the early days. As a result, the industry's attention has shifted to post-training and reinforcement learning.
AI developers are finding that, rather than spending billions of dollars on pre-training, it is more effective to invest in fine-tuning and interface design to build better vertical applications. The success of Anthropic's Claude Code, for example, shows that while foundation model companies still hold advantages in specific domains, those advantages are no longer insurmountable moats.
The Moat Is Gone: AI Giants May Become "Coffee Bean Sellers"
The article quotes one founder's metaphor to capture the potential consequence of this shift: companies like OpenAI and Anthropic could end up as low-margin backend suppliers, "just like selling coffee beans to Starbucks."
As open-source alternatives mature, foundation models will come under pricing pressure in application-level competition. Startups can swap the underlying model as needed, and end users barely notice the difference. Martin Casado, a venture capitalist at a16z, pointed out that even though OpenAI was the first lab to ship coding, image, and video generation models, it has lost ground to competitors in each of those areas. His conclusion: "There is no inherent moat in the AI technology stack."
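
To illustrate how an application can treat the underlying model as a swappable commodity, here is a minimal, purely hypothetical Python sketch of a provider-agnostic layer. The names (ChatModel, MODEL_REGISTRY, the stub backends) are invented for illustration and do not correspond to any vendor's SDK or to how any particular startup actually builds this.

```python
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """Hypothetical provider-agnostic interface the application codes against."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's reply to a single prompt."""


class ProprietaryBackend(ChatModel):
    """Stub adapter standing in for a call to a hosted proprietary model."""

    def complete(self, prompt: str) -> str:
        # In a real system this would call the vendor's API.
        return f"[proprietary-model reply to: {prompt}]"


class OpenSourceBackend(ChatModel):
    """Stub adapter standing in for a self-hosted open-weights model."""

    def complete(self, prompt: str) -> str:
        # In a real system this would call a locally served open-source model.
        return f"[open-source-model reply to: {prompt}]"


# Swapping providers becomes a one-line configuration change; the rest of the
# application only ever sees the ChatModel interface, so users never notice.
MODEL_REGISTRY: dict[str, type[ChatModel]] = {
    "proprietary": ProprietaryBackend,
    "open_source": OpenSourceBackend,
}


def get_model(provider: str) -> ChatModel:
    return MODEL_REGISTRY[provider]()


if __name__ == "__main__":
    model = get_model("open_source")  # flip to "proprietary" without touching callers
    print(model.complete("Summarize this contract."))
```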
Giants Still Hold Advantages, but the Future Remains Uncertain
Even so, foundation model companies are not defenseless. They retain strong brand recognition, well-established infrastructure, and deep financial resources. OpenAI's consumer business may be harder to replicate than its coding business, and new advantages may emerge as the industry matures. Moreover, if the race toward general intelligence produces breakthroughs in fields such as drug discovery or materials science, it could completely reshape how the value of AI models is perceived.
