The race for large language model (LLM) compute is pushing ever deeper into fundamental, specialized chip territory. On February 24, 2026, MatX, an AI chip startup founded by a senior engineer from Google's TPU team, announced that it had closed a $500 million Series B financing round (approximately 3.445 billion RMB).

The round featured a stellar lineup: it attracted strategic participation from semiconductor players such as Alchip and Marvell, as well as significant investments from multiple top-tier investment institutions.

Core Product: The MatX One Chip

MatX's confidence in this round of financing stems from its next-generation processor currently under development, MatX One. The chip aims to deliver both high throughput and low latency in large model inference:

  • Innovative Architecture: It uses a "partitionable systolic array" structure. This design combines the energy efficiency of a single large array with the scheduling flexibility of many small arrays, maximizing hardware utilization.

  • Memory Innovation: The MatX One combines the extremely low latency of on-chip SRAM with the long-context capacity of HBM (High Bandwidth Memory), breaking through the memory bottlenecks of traditional architectures.

  • Full-Scenario Coverage: Whether the workload is prefill, high-frequency decoding, or reinforcement learning training, the MatX One is claimed to deliver industry-leading performance.
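MatX has not published architectural details, so as a rough illustration of the general idea behind a systolic array, here is a minimal simulation of the output-stationary dataflow such arrays commonly use. All names and the partitioning scheme below are hypothetical, not MatX's design: the point is only that one big grid of processing elements can either run one large matrix multiply or be split into independent sub-grids that each serve a smaller job.

```python
import numpy as np

def systolic_matmul(a, b):
    """Simulate an output-stationary systolic array computing a @ b.

    Each processing element (i, j) holds one accumulator; at time step t
    it receives a[i, t] streaming in from the left and b[t, j] from the
    top, multiplies them, and adds the product to its accumulator. One
    rank-1 update per step is exactly that behavior for the whole grid.
    """
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    acc = np.zeros((m, n))
    for t in range(k):                      # one systolic time step per t
        acc += np.outer(a[:, t], b[t, :])   # every PE fires once
    return acc

def run_partitioned(jobs, pe_rows, pe_cols, tiles=2):
    """Hypothetical 'partitionable' mode: logically split the PE grid
    into `tiles` column slices and run one independent small matmul per
    slice, instead of leaving most of the grid idle on a small job."""
    slice_cols = pe_cols // tiles
    results = []
    for a, b in jobs[:tiles]:
        assert a.shape[0] <= pe_rows and b.shape[1] <= slice_cols, \
            "each job must fit inside its partition"
        results.append(systolic_matmul(a, b))
    return results
```

A small job that fills only a corner of a large monolithic array wastes the rest of the grid; running several such jobs side by side on partitions is the utilization argument the bullet above alludes to.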

Business Prospects: Lower LLM Usage Costs

In today's computing power market, reducing the cost of token output is a goal shared by every model vendor. IT Home cited MatX's official statement that its product has the potential to match or exceed the throughput efficiency of conventional chips, significantly lowering the barrier to deploying and maintaining large models.
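The economics behind that claim reduce to simple arithmetic: serving cost per token is hardware cost per hour divided by tokens produced per hour, so higher throughput at similar hardware cost directly cuts the price of each token. The figures below are purely hypothetical, not MatX or market numbers.

```python
def cost_per_million_tokens(hourly_cost_usd: float,
                            tokens_per_second: float) -> float:
    """Dollars per million output tokens for hardware with a given
    hourly cost and sustained decode throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_cost_usd / tokens_per_hour * 1_000_000

# Hypothetical illustration: doubling throughput at the same hourly
# cost halves the cost per token.
baseline = cost_per_million_tokens(3.0, 1_000)   # $3/hr at 1,000 tok/s
improved = cost_per_million_tokens(3.0, 2_000)   # same cost, 2x speed
```

Under these made-up numbers the baseline works out to roughly $0.83 per million tokens and the doubled-throughput case to about half that, which is why throughput per dollar, rather than raw speed alone, is the metric inference-chip vendors compete on.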


Industry Overview: The AI Chip Battle Is Intensifying

MatX's rise is just one facet of the global AI chip boom, and recent industry moves have come in quick succession:

  • SambaNova released its fifth-generation RDU chip and reached a deep collaboration with Intel.

  • Positron announced the Asimov chip, claiming performance per watt up to five times that of NVIDIA's Rubin architecture.

  • Breakthrough in China: A research team in China recently developed a flexible AI chip costing less than $1 that can withstand 40,000 folding cycles, pointing to new possibilities for wearable AI hardware.