USTC and the Fengshenbang Team Release ChiMed-GPT, a Large Language Model for the Medical Domain


On April 27, the Qwen app began a grey-release test of its video generation model HappyHorse, which users can try via a button at the bottom of the home screen. The model stands out for its narrative ability, audio-visual synchronization, and stylistic range. During internal testing it generated a large number of short films in TVB Hong Kong drama, CCTV Romance of the Three Kingdoms, and vintage-film styles, and users can create similar videos from a single prompt. It is particularly strong at plot-driven videos: a brief description is enough for it to generate multi-scene content automatically.
South Korea's government signed a memorandum of understanding (MOU) with Google DeepMind to collaborate on AI research, talent development, and responsible use. A key initiative is the National Science AI Research Center, launching in May, which targets breakthroughs in eight fields including biology, meteorology, and climate.
On April 27, the National Development and Reform Commission announced that it had legally prohibited the foreign acquisition of the general artificial intelligence platform Manus and requested that the transaction be revoked. Billed as the world's first general AI entity, Manus has performed strongly since its launch in March 2025, processing over 14.7 quadrillion tokens and creating more than 80 million virtual computers by early December. Manus had previously announced, on December 30, 2025, its intention to join Meta.
Recently, 73 visual-effects shots in the Amazon series The David Dynasty were completed with generative AI supplied by the Chinese company Kuaishou, saving the production significant location and post-production costs. The case shows that AI video generation is accelerating its penetration into film production, drawing industry attention to its implications for costs, efficiency, and traditional workflows.
OpenAI released the Privacy Filter model, designed to help developers anonymize personally identifiable information (PII) in text. The model has 150 million parameters, uses a Mixture of Experts (MoE) design, and is open-sourced on Hugging Face and GitHub under the Apache 2.0 license. Its core advantage is deep language understanding: it identifies sensitive information in unstructured text from context, surpassing traditional rule-based methods.
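To illustrate the baseline such a model is compared against, here is a minimal sketch of traditional rule-based PII masking using regular expressions. The patterns and placeholder tags below are illustrative assumptions, not part of the released model or its API.

```python
import re

# Hypothetical rule-based patterns; real systems maintain far larger rule sets.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace every match of each pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Email jane@example.com or call +1 555-123-4567"))
# → Email [EMAIL] or call [PHONE]
```

The limitation is visible immediately: fixed patterns catch only surface forms, so context-dependent PII (a person's name in "ask John about the invoice", an address written in prose) slips through. Context-aware models like Privacy Filter are meant to close exactly that gap.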