Recently, a research team from Tsinghua University published a thought-provoking result in the international journal "Nature Machine Intelligence," introducing a new concept called "capability density." The study challenges the traditional way of evaluating the power of large AI models: rather than focusing only on the number of model parameters, or "size," one should also consider how much intelligence each parameter delivers, or "density."
Traditionally, the AI field has generally held that larger models mean stronger capabilities. This "scaling law" has driven the emergence of many powerful AI models in recent years. However, as parameter counts grow, the cost of training and running these models surges, which limits the industrial application of AI technology.

Research from Tsinghua University shows that increasing the "capability density" of AI models cannot be achieved simply through model compression. The researchers point out that forcibly compressing a large model is like stuffing a thick dictionary into a small notebook: some "intelligence" is inevitably lost. Instead, they emphasize, a more advanced "data + computing power + algorithm" system is needed to create "high-density" small models.
The study also found that the "capability density" of 51 open-source large models released in recent years has been growing exponentially, doubling approximately every 3.5 months. This means that if a complex task currently requires a "brain" the size of a gymnasium, in the near future a "brain" the size of a living room will suffice; another 3.5 months later, that "brain" may shrink to the size of a backpack.
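To make the reported trend concrete, below is a minimal Python sketch of the arithmetic implied by the claim: if capability density doubles roughly every 3.5 months, then the parameter count needed to match a fixed level of capability halves on the same schedule. The function names and the 100-billion-parameter starting point are hypothetical illustrations, not figures taken from the paper.

```python
# Illustrative sketch only: a simple exponential model consistent with the
# reported figure that capability density doubles about every 3.5 months.
DOUBLING_PERIOD_MONTHS = 3.5  # reported doubling time for capability density

def density_growth_factor(months: float) -> float:
    """Factor by which capability density grows after the given number of months."""
    return 2 ** (months / DOUBLING_PERIOD_MONTHS)

def equivalent_model_size(current_params: float, months: float) -> float:
    """Parameters needed later to match today's capability, assuming capability
    is proportional to density x size and density keeps doubling on schedule."""
    return current_params / density_growth_factor(months)

if __name__ == "__main__":
    today = 100e9  # a hypothetical 100-billion-parameter model today
    for months in (3.5, 7.0, 12.0):
        size = equivalent_model_size(today, months) / 1e9
        print(f"after {months:>4} months: ~{size:.1f}B parameters for the same capability")
```

Under these assumptions, the same capability would need roughly 50 billion parameters after 3.5 months and about 25 billion after 7 months, which is the quantitative version of the gym-to-living-room-to-backpack analogy above.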
