Inceptive: A Former Google Brain Team's "AGI Editor" for Drug Design


NVIDIA launches the Nemotron 3 series, which combines Mamba and Transformer architectures for efficient long-context processing with reduced resource usage. Designed for AI agents handling complex tasks, the series includes Nano, Super, and Ultra models; Nano is available now, with Super and Ultra expected in H1 2026…
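For context on why mixing Mamba-style state-space layers with attention helps at long context lengths: attention compares every pair of tokens, so its cost grows quadratically with sequence length, while a state-space recurrence carries one fixed-size state through the sequence. The sketch below is a toy illustration of that trade-off under assumed shapes and weights; it is not NVIDIA's Nemotron 3 implementation.

```python
import numpy as np

# Toy hybrid stack (illustrative only, not NVIDIA's code): an attention
# block costs O(L^2) in time and memory, a state-space scan costs O(L).

rng = np.random.default_rng(0)
L, d, d_state = 1024, 64, 16              # sequence length, model dim, SSM state dim (made up)

Wq, Wk, Wv = (rng.standard_normal((d, d)) * d**-0.5 for _ in range(3))
A = np.full(d_state, 0.9)                 # per-step decay of the SSM state (made up)
B = rng.standard_normal((d_state, d)) * d**-0.5
C = rng.standard_normal((d, d_state)) * d_state**-0.5

def attention(x):
    """Single-head softmax attention: builds an (L, L) score matrix."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)          # the quadratic part
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def ssm_scan(x):
    """Linear state-space recurrence: one fixed-size state, linear in L."""
    h = np.zeros(d_state)
    out = np.empty_like(x)
    for t in range(L):
        h = A * h + B @ x[t]               # state update (diagonal A for simplicity)
        out[t] = C @ h
    return out

x = rng.standard_normal((L, d))
y = ssm_scan(attention(x))                 # a minimal "hybrid": attention block then SSM block
print(y.shape)                             # (1024, 64)
```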
Runway's latest model, Gen-4.5, beat Google's Veo 3 and OpenAI's Sora 2 Pro on Video Arena, a third-party blind-testing platform, becoming the first model from a small team to reach the top of the leaderboard. Runway's CEO stressed that focusing on research and rapid iteration is viable, arguing that a team of about 100 people can challenge trillion-dollar companies not through budget but through talent density. The model uses an in-house spatio-temporal hybrid Transformer architecture, marking a breakthrough in AI video generation by a small team.
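As background on what a "spatio-temporal hybrid Transformer" usually means in video models: attention is often factorized so tokens first attend within a frame (spatial) and then across frames at the same position (temporal), instead of attending over all frames and positions at once. The sketch below illustrates that generic factorization with assumed shapes and no learned projections; it is not Runway's Gen-4.5 architecture.

```python
import numpy as np

# Generic factorized space-time attention over video tokens (illustration only).
# Full attention over T*N tokens costs O((T*N)^2); factorizing into per-frame
# spatial attention plus per-position temporal attention costs O(T*N^2 + N*T^2).

rng = np.random.default_rng(1)
T, N, d = 8, 64, 32                        # frames, tokens per frame, channels (made up)

def softmax_attention(x):
    """Plain single-head self-attention over the first axis (projections omitted)."""
    scores = (x @ x.T) / np.sqrt(x.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ x

def spacetime_attention(video):
    """video: (T, N, d). Spatial attention within each frame, then temporal per position."""
    spatial = np.stack([softmax_attention(frame) for frame in video])
    temporal = np.stack([softmax_attention(spatial[:, i]) for i in range(N)], axis=1)
    return temporal

tokens = rng.standard_normal((T, N, d))
print(spacetime_attention(tokens).shape)   # (8, 64, 32)
```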
Dahua Technology boosts Q3 net profit by 44% to 1.06 billion yuan and is deploying 6B-parameter vision models on 16 GB edge devices. Since 2019, its Transformer-based self-training system has evolved into the V/M/L model series for efficient edge AI…
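A rough back-of-envelope shows why a 6B-parameter model can fit on a 16 GB edge device: weight memory is parameter count times bytes per parameter, so precision is the lever. The figures below are generic arithmetic, not Dahua's deployment numbers, and ignore activations and runtime overhead.

```python
# Weight memory for a 6B-parameter model at different precisions
# (generic arithmetic, not Dahua's figures; activations and runtime overhead ignored).
params = 6e9
for name, bytes_per_param in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    gib = params * bytes_per_param / 2**30
    print(f"{name:>9}: {gib:5.1f} GiB")
# fp32 (~22 GiB) would not fit in 16 GB; fp16 (~11 GiB) or int8 (~5.6 GiB)
# leaves headroom for activations and the rest of the system.
```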
Databricks co-founder Andy Konwinski warned that the US is ceding AI research leadership to China, which he called an existential threat to democracy. He noted that, judging from feedback from Berkeley and Stanford PhD students, roughly half of the notable new AI ideas over the past year have come from Chinese teams, a sharply higher share than before. Konwinski co-founded the venture capital firm Laude with a partner in 2024 and also runs a nonprofit accelerator, the Laude Institute.
Recently, the teams of Li Guoqi and Xu Bo at the Institute of Automation, Chinese Academy of Sciences, jointly released the world's first large-scale brain-inspired spiking large model, SpikingBrain 1.0. The model is remarkably fast on long texts: it can process ultra-long inputs of 4 million tokens at more than 100 times the speed of current mainstream Transformer models, while requiring only about 2% of the training data. Mainstream large language models today, such as the GPT series, are generally built on the Transformer architecture.
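For intuition about what "spiking" means here: a spiking neuron integrates its input over time and emits a discrete spike only when its membrane potential crosses a threshold, so downstream computation is driven by sparse spike events rather than dense activations. The sketch below is a textbook leaky integrate-and-fire neuron with made-up constants; it is not SpikingBrain 1.0's actual formulation.

```python
import numpy as np

# Textbook leaky integrate-and-fire (LIF) neurons, the basic unit behind
# spiking models (illustrative only -- not SpikingBrain 1.0's parameters).

rng = np.random.default_rng(2)
steps, n_neurons = 100, 8
decay, threshold = 0.9, 1.0               # membrane leak factor and firing threshold (made up)

v = np.zeros(n_neurons)                    # membrane potentials
inputs = rng.random((steps, n_neurons)) * 0.3
spike_counts = np.zeros(n_neurons)

for t in range(steps):
    v = decay * v + inputs[t]              # leaky integration of input current
    spikes = (v >= threshold).astype(float)
    v = np.where(spikes > 0, 0.0, v)       # reset neurons that fired
    spike_counts += spikes

# Only the sparse spike events would drive downstream computation, which is
# where the claimed efficiency of spiking models comes from.
print("spikes per neuron over", steps, "steps:", spike_counts)
```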