On its recent earnings call, Broadcom CEO Hock Tan revealed that the company received a $10 billion order from Anthropic in the previous quarter for supplying Google's latest Tensor Processing Units (TPUs). Tan added that a further $11 billion order arrived from the same customer in the fourth quarter, with delivery expected by the end of 2026, bringing Anthropic's total TPU orders to $21 billion.

Broadcom also disclosed that it currently holds $73 billion in outstanding orders for AI products, which are expected to ship over the next six quarters. TPUs are accelerators developed by Google specifically for AI workloads. Now in their seventh generation, TPUs are not only available to customers through Google Cloud but also power the training and deployment of Google's internal systems, including work on the Gemini family of models. Google is responsible for the architectural design of TPUs, while Broadcom converts those designs into manufacturable silicon and handles mass production. The partnership reflects Google's ongoing strategy of controlling key AI hardware design while relying on semiconductor partners for manufacturing expertise.

As a long-term user of TPUs, Anthropic plans a major expansion of its infrastructure, aiming to deploy 1 million TPUs by 2026 alongside more than one gigawatt of new compute capacity, which would make it one of the largest dedicated AI computing projects in the industry.
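For a rough sense of that scale, the back-of-the-envelope sketch below simply divides the stated power figure evenly across the stated chip count; the even split is an assumption for illustration, since real deployments also spend power on host servers, networking, and cooling.

```python
# Back-of-the-envelope: implied average power budget per deployed TPU.
# The two inputs come from the article (1 million TPUs, "over one gigawatt");
# splitting power evenly per chip is an assumption, as real facilities also
# draw power for hosts, networking, and cooling.

total_power_watts = 1_000_000_000   # ~1 gigawatt of new capacity
num_tpus = 1_000_000                # planned deployment by 2026

watts_per_tpu = total_power_watts / num_tpus
print(f"Implied average power per TPU: {watts_per_tpu:.0f} W")  # -> 1000 W
```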

Other companies have also adopted or are evaluating TPUs, including Meta, Cohere, Apple, and Ilya Sutskever's startup Safe Superintelligence (SSI). According to The Information, Meta is evaluating deploying TPUs in its data centers starting in 2027.

TPUs' growing adoption is driven by their energy efficiency and performance tuned for AI training and inference, and they are gradually challenging NVIDIA's share of the GPU market. Broadcom said its TPU/XPU (custom AI accelerator) customer count has reached five; the full list is not disclosed, but confirmed customers include Google and Anthropic.

According to the latest analysis from SemiAnalysis, TPU v7's peak floating-point throughput (FLOPS) and memory bandwidth are about 10% lower than NVIDIA's GB200 platform, but its total cost of ownership (TCO) is more favorable. SemiAnalysis estimates that deploying Ironwood internally at Google costs 44% less than deploying an equivalent NVIDIA system. Even at the prices offered to external customers, TPU v7's TCO is about 30% lower than NVIDIA's GB200 and about 41% lower than the upcoming GB300. The analysis notes that if Anthropic achieves roughly 40% machine utilization on TPUs, its effective training cost per FLOP could be 50% to 60% lower than that of GB300-class GPU clusters.
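To make the cost-per-FLOP reasoning concrete, here is a minimal sketch of how such a comparison can be computed. The TCO delta, the peak-FLOPS delta, and the 40% TPU utilization come from the figures above; the assumed GPU-cluster utilization (and the reuse of the GB200 FLOPS gap as a stand-in for GB300) are illustrative assumptions, not numbers from SemiAnalysis.

```python
# Illustrative cost-per-effective-FLOP comparison in normalized units.
# Relative deltas are taken from the article; the absolute baseline and the
# GPU-cluster utilization are assumptions chosen only for illustration.

def cost_per_effective_flop(tco, peak_flops, utilization):
    """Cost divided by useful compute actually delivered."""
    return tco / (peak_flops * utilization)

# GB300-class baseline (arbitrary units).
gb300_tco = 1.00          # baseline cost
gb300_peak = 1.00         # baseline peak FLOPS
gb300_util = 0.30         # assumed GPU-cluster utilization (illustrative)

# TPU v7 relative to that baseline, per the article:
tpu_tco = 1.00 - 0.41     # ~41% lower TCO than GB300
tpu_peak = 1.00 - 0.10    # ~10% lower peak FLOPS (stated vs. GB200; reused here)
tpu_util = 0.40           # ~40% machine utilization cited for Anthropic

gpu_cost = cost_per_effective_flop(gb300_tco, gb300_peak, gb300_util)
tpu_cost = cost_per_effective_flop(tpu_tco, tpu_peak, tpu_util)

savings = 1 - tpu_cost / gpu_cost
print(f"Effective cost per FLOP is ~{savings:.0%} lower on the TPU side")
# With these assumptions the result lands near 51%, inside the 50-60% range
# quoted above; the exact figure depends on the assumed GPU utilization.
```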

Key Points:

🌟 Broadcom has booked $21 billion in TPU orders from Anthropic, supporting the latter's AI infrastructure expansion.  

💡 TPUs are widely used by multiple tech companies, challenging NVIDIA's GPU market due to their high performance and optimization for AI tasks.  

📉 TPUs' total cost of ownership is lower than that of comparable NVIDIA products, underscoring their competitive advantage in AI computing.