A striking transformation is quietly under way in the AI industry. Startups that were once mocked as "GPT wrappers" are now highly sought after by investors. Meanwhile, tech giants that have invested billions of dollars in building foundational models are facing unprecedented challenges.

At the core of this change is a fundamental shift in perception: are foundational models really that important? The answer may surprise many. A growing number of AI startups have begun to treat foundational models as interchangeable commodities, swapped out as readily as a car engine. Their focus has shifted to customizing models for specific tasks and to interface design, rather than placing blind faith in the underlying infrastructure.

This change in perspective is not without basis. The returns from scaling up the pretraining of large foundational models are slowing: the early gains from training AI models on ever-larger datasets have entered a phase of diminishing returns. Although AI continues to advance, the head start enjoyed by ultra-large foundational models is fading, and industry attention has shifted to newer levers such as post-training and reinforcement learning.


If you want to build better AI coding tools, the lesson is to focus on fine-tuning and interface design rather than spending billions on pretraining. The success of Anthropic's Claude Code is a case in point. Foundational model companies still perform well in these areas, but their advantage is no longer as unshakable as it once was.

This change is fundamentally reshaping the competitive landscape of AI and weakening the traditional advantages of the largest AI labs. What we are seeing is no longer a single race toward artificial general intelligence, but rather a flourishing of discrete businesses such as software development, enterprise data management, and image generation. Beyond first-mover advantages, building foundational models does not confer a clear competitive edge in these specific businesses.

Worse still, the abundance of open-source alternatives means that if foundational models lose their edge at the application layer, they may also lose pricing power. That would turn companies like OpenAI and Anthropic into low-margin commodity businesses, or, as one founder put it to me, "like selling coffee beans to Starbucks."

This is a dramatic shift for the AI industry. Throughout the current AI boom, the success of AI has been closely tied to the success of the companies that build foundational models, especially OpenAI, Anthropic, and Google. Supporting AI meant believing that, thanks to the technology's transformative effects, these companies would become significant enterprises with generational impact. One could debate which company would win, but it seemed clear that a foundational model company would eventually hold the keys to the kingdom.

There were many reasons to hold this view. For years, foundational model development was the only AI business, and rapid progress made the leaders seem unbeatable. Silicon Valley has long been enamored of platform advantages, and people assumed that however AI models ultimately made money, the biggest profits would flow back to the foundational model companies, because they had done the work that was hardest to replicate.

The past year has complicated this story. Many third-party AI services have succeeded, but they tend to treat foundational models as interchangeable. For startups, it no longer matters much whether their products run on GPT-5, Claude, or Gemini; they expect to be able to swap one model for another without end users noticing the difference.
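To make the "interchangeable engine" idea concrete, here is a minimal sketch (not any particular startup's code) of a provider-agnostic wrapper. It assumes the official openai and anthropic Python SDKs and API keys set via environment variables; the class names and model strings are placeholders, not a recommendation.

```python
# Illustrative sketch: application code depends on a small interface,
# so the underlying foundational model can be swapped via configuration.
from typing import Protocol


class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...


class OpenAIModel:
    def __init__(self, model: str = "gpt-4o"):  # placeholder model name
        from openai import OpenAI
        self.client = OpenAI()
        self.model = model

    def complete(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content


class AnthropicModel:
    def __init__(self, model: str = "claude-3-5-sonnet-latest"):  # placeholder
        import anthropic
        self.client = anthropic.Anthropic()
        self.model = model

    def complete(self, prompt: str) -> str:
        msg = self.client.messages.create(
            model=self.model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text


def summarize(model: ChatModel, text: str) -> str:
    # The product logic never references a specific provider,
    # which is what makes the model layer feel like a commodity.
    return model.complete(f"Summarize in one sentence:\n{text}")
```

In a setup like this, moving from one lab's model to another is a configuration change rather than a rewrite, which is precisely why startups feel little loyalty to any single foundational model.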

Martin Casado from venture capital firm a16z pointed this out in a recent podcast: OpenAI was the first lab to launch programming models as well as image and video generation models, yet it lost to competitors in all three areas. Casado concluded, "As far as we know, there is no inherent moat in the AI technology stack."

Certainly, we should not completely rule out foundational model companies. They still have many lasting advantages, including brand awareness, infrastructure, and incredible cash reserves. OpenAI's consumer business may be harder to replicate than its coding business, and other advantages may emerge as the industry matures.

At the same time, however, the strategy of building increasingly larger foundational models seems less appealing than it was last year. Meta's billion-dollar spending spree is beginning to look extremely risky.

As the focus of AI competition shifts from underlying technology to application innovation, and as the models themselves come to be seen as interchangeable commodities, the power structure of the entire AI industry is undergoing a profound change. Those who were once dismissed as "wrappers" may turn out to be the real winners of this technological revolution, while the tech giants that have invested heavily in building AI empires will need to rethink their place in a rapidly changing world.