Inceptive: The Ex-Google Brain Team Using AI to Direct Drug Design


At Davos 2026, DeepMind's CEO noted that China's AI is catching up with the West, trailing by roughly six months, and praised DeepSeek R1 for the impressive performance that shook Silicon Valley…
Elon Musk announced the open-sourcing of the X platform's new recommendation algorithm, which builds on Grok's Transformer architecture. Aimed at improving transparency and speeding up iteration, it ranks content by predicting the probability of user interactions, though improvements are still needed…
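Ranking by predicted interaction probabilities generally means scoring each candidate post with a weighted sum of per-action engagement predictions and sorting by that score. Below is a minimal Python sketch of that general idea; the action names, weights, and `Candidate` structure are illustrative assumptions, not X's actual code.

```python
from dataclasses import dataclass

# Hypothetical per-action weights; the real action set and weights for this
# model are not public, so these values are purely illustrative.
ACTION_WEIGHTS = {"like": 0.5, "reply": 13.5, "repost": 1.0, "dwell": 0.1}

@dataclass
class Candidate:
    post_id: str
    # Predicted probability of each user action, as output by an engagement model.
    action_probs: dict

def score(candidate: Candidate) -> float:
    """Combine predicted interaction probabilities into one ranking score."""
    return sum(ACTION_WEIGHTS[a] * p for a, p in candidate.action_probs.items())

def rank(candidates: list[Candidate]) -> list[Candidate]:
    """Order candidate posts by expected weighted engagement, highest first."""
    return sorted(candidates, key=score, reverse=True)

feed = rank([
    Candidate("post_a", {"like": 0.30, "reply": 0.02, "repost": 0.05, "dwell": 0.60}),
    Candidate("post_b", {"like": 0.10, "reply": 0.08, "repost": 0.01, "dwell": 0.40}),
])
print([c.post_id for c in feed])
```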
The DeepSeek team launched the Engram module, which introduces a 'conditional memory axis' into sparse large language models to reduce the computational waste incurred when traditional Transformers reprocess repetitive knowledge. Positioned as a complement to mixture-of-experts models, it integrates N-gram embeddings so that repetitive patterns can be handled more efficiently.
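The item doesn't detail Engram's internals, but an N-gram embedding memory is typically a hashed lookup table keyed on each position's trailing n-gram, letting the model retrieve a cached representation of a repeated pattern instead of recomputing it through attention. A minimal PyTorch sketch of that general mechanism follows; the class name, rolling hash, and table size are illustrative assumptions, not DeepSeek's design.

```python
import torch
import torch.nn as nn

class NGramMemory(nn.Module):
    """Illustrative N-gram embedding lookup: a hashed table maps each
    position's trailing n-gram of token ids to a learned vector, so
    frequently repeated patterns are 'remembered' rather than recomputed."""

    def __init__(self, d_model: int, n: int = 3, table_size: int = 2**20):
        super().__init__()
        self.n = n
        self.table = nn.Embedding(table_size, d_model)
        # Random odd multipliers for a simple hash of the n-gram (illustrative).
        self.register_buffer("mult", torch.randint(1, 2**31 - 1, (n,)) * 2 + 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) integer ids.
        b, t = token_ids.shape
        # Left-pad so every position has a full trailing n-gram.
        padded = nn.functional.pad(token_ids, (self.n - 1, 0))
        # Stack the n trailing tokens at every position: (b, t, n).
        grams = padded.unfold(dimension=1, size=self.n, step=1)
        # Hash each n-gram into the table and look up its embedding.
        idx = (grams * self.mult).sum(-1) % self.table.num_embeddings
        return self.table(idx)  # (b, t, d_model), added to the hidden stream
```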
NVIDIA launched the Nemotron 3 series, which combines Mamba and Transformer architectures for efficient long-context processing with reduced resource usage. Designed for AI agents handling complex tasks, the series comprises Nano, Super, and Ultra models; Nano is available now, with Super and Ultra expected in H1 2026…
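Mamba/Transformer hybrids generally interleave linear-time sequence layers with a few quadratic attention layers, so most of a long context is processed cheaply while some layers still mix tokens globally. The sketch below shows only that layering pattern, not Nemotron 3's actual architecture; a GRU stands in for the Mamba selective state-space layer purely for illustration.

```python
import torch
import torch.nn as nn

class AttnBlock(nn.Module):
    """Standard pre-norm self-attention block (the 'Transformer' layers)."""
    def __init__(self, d: int, heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)

    def forward(self, x):
        h = self.norm(x)
        out, _ = self.attn(h, h, h, need_weights=False)
        return x + out

class RecurrentBlock(nn.Module):
    """Stand-in for a Mamba-style selective state-space layer: a linear-time
    recurrent pass over the sequence (a GRU here, purely for illustration)."""
    def __init__(self, d: int):
        super().__init__()
        self.norm = nn.LayerNorm(d)
        self.rnn = nn.GRU(d, d, batch_first=True)

    def forward(self, x):
        out, _ = self.rnn(self.norm(x))
        return x + out

class HybridStack(nn.Module):
    """Interleaves linear-time recurrent layers with occasional attention
    layers: most layers cost O(n) in sequence length, with a few O(n^2)
    attention layers retained for global token mixing."""
    def __init__(self, d: int = 256, n_layers: int = 8, attn_every: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            AttnBlock(d) if (i + 1) % attn_every == 0 else RecurrentBlock(d)
            for i in range(n_layers)
        )

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

x = torch.randn(2, 1024, 256)   # (batch, long sequence, d_model)
print(HybridStack()(x).shape)   # torch.Size([2, 1024, 256])
```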
Runway's latest model, Gen-4.5, beat Google's Veo3 and OpenAI's Sora2Pro on Video Arena, a third-party blind-testing platform, becoming the first model from a small team to top the leaderboard. Runway's CEO stressed that a focus on research and rapid iteration is viable, arguing that a team of 100 people can challenge trillion-dollar companies not through budget but through talent density. The model uses a self-developed spatio-temporal hybrid Transformer architecture, marking a breakthrough in AI video generation achieved by a small team.
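The piece doesn't specify what 'spatio-temporal hybrid Transformer' means here; one common way to structure attention over video is to factorize it into a spatial pass within each frame and a temporal pass across frames. A minimal sketch of that factorization follows, with all dimensions and names as illustrative assumptions rather than Gen-4.5's actual design.

```python
import torch
import torch.nn as nn

class SpaceTimeBlock(nn.Module):
    """Factorized video attention: attend across the spatial patches within
    each frame, then across time at each spatial location. One common
    reading of a spatio-temporal Transformer; Gen-4.5's internals are not
    public."""

    def __init__(self, d: int = 256, heads: int = 8):
        super().__init__()
        self.spatial = nn.MultiheadAttention(d, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(d, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d)
        self.norm2 = nn.LayerNorm(d)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, patches, d) video latents.
        b, t, p, d = x.shape

        # Spatial pass: fold time into the batch, attend over patches.
        h = self.norm1(x).reshape(b * t, p, d)
        s, _ = self.spatial(h, h, h, need_weights=False)
        x = x + s.reshape(b, t, p, d)

        # Temporal pass: fold space into the batch, attend over frames.
        h = self.norm2(x).transpose(1, 2).reshape(b * p, t, d)
        s, _ = self.temporal(h, h, h, need_weights=False)
        x = x + s.reshape(b, p, t, d).transpose(1, 2)
        return x

latents = torch.randn(1, 16, 64, 256)   # 16 frames x 64 patches per frame
print(SpaceTimeBlock()(latents).shape)  # torch.Size([1, 16, 64, 256])
```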