The Zhitong Finance App learned that J.P. Morgan said the release of DeepSeek V3.2 marks the second wave of the “DeepSeek shock” in China's AI market: open-source reasoning capability close to frontier models is now available in China at a moderate price. This benefits most stakeholders in the Chinese AI ecosystem, namely cloud operators, AI chip makers, AI server makers, AI platforms, and SaaS developers. Analyst Alex Yao said in the report that DeepSeek cut its model API prices by 30%-70%, while long-context reasoning can reduce workloads by a factor of 6-10. Beneficiaries include Alibaba (09988), Tencent (00700), Baidu (09888), AMEC (688012.SH), NAURA (002371.SZ), Huaqin Technology (603296.SH), and Inspur Information (000977.SZ).
On December 1, DeepSeek announced the official release of the DeepSeek-V3.2 model. DeepSeek-V3.2 aims to balance reasoning ability and output length, making it suitable for everyday use such as question-and-answer scenarios and general agent tasks. On public reasoning benchmarks, DeepSeek-V3.2 reached the GPT-5 level, only slightly below Gemini-3.0-Pro; compared with Kimi-K2-Thinking, V3.2's output length was drastically reduced, significantly cutting computational overhead and user waiting time.
Unlike previous versions, which could not call tools in thinking mode, DeepSeek-V3.2 is the company's first model to integrate thinking into tool use, supporting tool calls in both thinking and non-thinking modes. The company proposed a large-scale method for synthesizing agent training data, constructing a large number of “difficult to answer, easy to verify” reinforcement learning tasks (1,800+ environments, 85,000+ complex instructions), greatly improving the model's generalization ability.
The previous model, V3.1, was optimized mainly for NVIDIA CUDA, whereas the new V3.2/V3.2-Exp provides day-0 support for Huawei Ascend, Cambricon, and Hygon chips, along with ready-made kernels for SGLang, vLLM, and other inference frameworks, marking a clear shift toward domestic hardware autonomy.