On April 7, Readhub reported that DeepSeek V4 is undergoing intensive gray-scale (limited rollout) testing. Interface screenshots shared by multiple programmers and social media bloggers suggest the new generation of models brings breakthroughs in the underlying architecture, along with significant upgrades to interaction logic and multimodal capabilities.

Key Highlights: A Three-Tiered New Functional Architecture

Based on the leaked test interface, DeepSeek V4 offers three core options, signaling a comprehensive evolution of its product matrix:

Lite Version (DeepSeek V4 Lite): Focuses on response speed, suitable for daily lightweight conversations.

Expert Version (DeepSeek V4): A deep logical reasoning mode, possibly built on the "new memory architecture" described in the paper bearing Liang Wenfeng's name as an author.

Vision Version (DeepSeek V4 Vision): Marks the deep integration of multimodal capabilities, capable of directly handling image and video analysis tasks.

Technical Approach: Firm Supporter of "Domestic Chips"

The rise of DeepSeek is not only about algorithms, but also about deep adaptation to the domestic compute ecosystem:

Prioritizing Domestic AI Chips: Reports say DeepSeek is developing at least two large models built entirely on domestic AI chips and has already kicked off a wave of domestic chip procurement.

Refusing Dependence: When US chip manufacturers previously applied for test access, DeepSeek declined to open V4 model testing to them, instead giving priority access to domestic companies for collaborative optimization.

Industry Expectations: Can It Challenge the Summit of AI Programming?

In addition to performance improvements, the market has more vertical field expectations for DeepSeek V4:

AI Programming Special Edition: Industry speculation suggests DeepSeek will launch a version specifically designed for code generation and engineering implementation, competing directly with Anthropic's Claude or OpenAI's GPT series.

Ultra-Long Text Processing: Continuing DeepSeek's established strengths, V4 is expected to handle ultra-long contexts of millions of tokens in a single pass.

Topic Tracking: A Long-Awaited Release Cycle

From the unveiling of the new MODEL1 architecture in January of this year, to the release of the OCR 2 model in February, and now frequent gray-scale tests, the release of DeepSeek V4 appears imminent. As the release window approaches this month, this flagship model, focused on "native memory" and "domestic adaptation," may once again reset the cost-performance ceiling for domestic large models.

Conclusion: An AI That Understands Chinese Computing Power Better

From automatic code validation to new visual interactions, DeepSeek is proving that domestic models can still chart an impressive evolutionary curve without relying on top-tier overseas chips.