Galileo Launches New Tool to Explain Hallucinations in Large AI Models


Six Beijing municipal departments jointly issued measures to upgrade the medical device industry through data circulation and large AI models. Key initiatives include building high-quality medical datasets and improving data-circulation policies to promote safe, compliant data use that meets the needs of enterprises and research institutions.
Tsinghua University published a study in Nature Machine Intelligence introducing the concept of "capability density", challenging traditional AI evaluation standards. The research argues that evaluation should consider not only a model's parameter count but also how much capability each parameter carries, questioning the scaling assumption that larger models are necessarily more capable.
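The intuition can be captured as a simple ratio. Below is a minimal, hypothetical Python sketch, assuming capability density is defined as a model's benchmark-implied "effective" parameter count divided by its actual parameter count; the function name and all numbers are illustrative, not taken from the study.

```python
# Hypothetical sketch of a "capability density" metric, assuming it is
# defined as effective parameters / actual parameters. Figures below are
# illustrative only, not results from the Tsinghua paper.

def capability_density(effective_params_b: float, actual_params_b: float) -> float:
    """Capability per parameter: how many "reference" parameters' worth of
    benchmark performance each actual parameter delivers."""
    return effective_params_b / actual_params_b

# Under this definition, a 3B-parameter model that matches a 9B reference
# model on benchmarks would have a capability density of 3.0.
print(capability_density(effective_params_b=9.0, actual_params_b=3.0))  # 3.0
```

On this view, progress shows up as rising density over time rather than only as rising parameter counts.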
The latest AI coding-model rankings from LMArena show Anthropic's Claude, OpenAI's GPT-5, and Zhipu's GLM-4.6 tied for first place globally. These models can significantly improve the efficiency of code writing, debugging, and optimization, driving advances in software development.
Alibaba Cloud's Tongyi series of large AI models saw their first large-scale e-commerce deployment during the "Double 11" shopping festival, comprehensively optimizing consumer experience, merchant operations, and traffic distribution for Taobao and Tmall. Among them, translation models such as Tongyi Qwen-MT played a core role in cross-language transactions.
The first AI large-model investment competition, organized by the US firm Nof1, has concluded, with Alibaba's Tongyi Qianwen Qwen3-Max winning on a 22.32% return, demonstrating leading capability in quantitative trading. The competition gave each of six top models $10,000 in initial funds to trade in a live environment on the Hyperliquid platform.
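For context, the reported figures imply the simple arithmetic below; the final balance is inferred from the stated 22.32% return and $10,000 stake, not a number given by the source.

```python
# Illustrative arithmetic only: the ending balance implied by a 22.32%
# return on a $10,000 starting stake (inferred, not reported by the source).
initial_funds = 10_000.00
return_rate = 0.2232
final_balance = initial_funds * (1 + return_rate)
print(f"${final_balance:,.2f}")  # $12,232.00
```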