RWKV: Small Team Aims to Be Android of AI Era with Big Model

The robotics research team at Google DeepMind recently released RT-2, a robotics project that took seven months to develop and is trained on a large model. RT-2 exhibits capabilities such as symbol understanding, reasoning, and human recognition, and can interpret human instructions and carry out the corresponding tasks. By coupling the large model with the robot's manipulation capabilities, RT-2 can complete tasks that require a logical leap, for example mapping an instruction about "extinct animals" to a plastic dinosaur. The project performed well across a range of sub-category tests, achieving up to three times the performance of the previous generation of robot models. The results demonstrate the potential of large models in robotics research and are expected to drive the development of robots going forward.
Tencent Yuanbao, Tencent's large language model AI application, has topped the free app download rankings on Apple's App Store in China, surpassing DeepSeek to claim the No. 1 spot. The current top five free apps are Tencent Yuanbao, DeepSeek, Personal Income Tax, Doubao, and Hongguo Short Videos.
QQ Browser has launched a new feature called "AI Essay Tutoring," designed to help students improve their writing skills throughout the writing process rather than simply handing them answers. The launch coincides with the start of the new school term, when many students are experimenting with AI tools for homework, a trend that has sparked widespread concern among parents and the public.
On March 3rd, Shanghai Secret Tower Network Technology Co., Ltd. announced that its AI search product has added a new "video" search module, further expanding its coverage of multimodal data. Built on the analysis and understanding of hundreds of millions of videos, the new module helps users find the learning and entertainment video resources they need more efficiently.
Tongyi Lingma has announced the launch of its latest inference model, Qwen2.5-Max, offering developers powerful programming and mathematical capabilities. Trained on over 20 trillion tokens and incorporating a meticulously designed post-training scheme, Qwen2.5-Max demonstrates exceptional performance.