DeepSeek-V3 Technical Report (2412.19437v2)
Abstract: We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) LLM with 671B total parameters with 37B activated for each token. To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2. Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance. We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities. Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models. Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training. In addition, its training process is remarkably stable. Throughout the entire training process, we did not experience any irrecoverable loss spikes or perform any rollbacks. The model checkpoints are available at https://github.com/deepseek-ai/DeepSeek-V3.
Summary
- The paper presents DeepSeek-V3, a 671B-parameter Mixture-of-Experts language model that activates only 37B parameters per token for efficiency.
- The paper employs innovative techniques including Multi-Head Latent Attention and DeepSeekMoE to reduce inference cost and balance load without auxiliary loss.
- The paper demonstrates strong benchmark performance through pre-training on 14.8 trillion tokens and advanced strategies such as Multi-Token Prediction and Reinforcement Learning.
The paper presents DeepSeek-V3, a Mixture-of-Experts (MoE) LLM with 671B total parameters, where 37B parameters are activated for each token. The model is designed for efficient inference and training, achieved through Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were previously validated in DeepSeek-V2. Key innovations include an auxiliary-loss-free strategy for load balancing and a multi-token prediction training objective. The model was pre-trained on 14.8 trillion tokens and further refined through Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL). The training costs totaled 2.788M H800 GPU hours.
Architecture
DeepSeek-V3 adopts a Transformer framework incorporating MLA and DeepSeekMoE.
- Multi-Head Latent Attention (MLA): This reduces the Key-Value (KV) cache size during inference; a code sketch follows the symbol definitions below. The compression is achieved through the equations:
$\mathbf{c}_t^{KV} = W^{DKV} \mathbf{h}_t$
$[\mathbf{k}_{t,1}^{C}; \mathbf{k}_{t,2}^{C}; ...; \mathbf{k}_{t,n_h}^{C}] = \mathbf{k}_t^{C} = W^{UK} \mathbf{c}_t^{KV}$
$\mathbf{k}_t^{R} = \operatorname{RoPE}(W^{KR} \mathbf{h}_t)$
$\mathbf{k}_{t,i} = [\mathbf{k}_{t,i}^{C}; \mathbf{k}_t^{R}]$
$[\mathbf{v}_{t,1}^{C}; \mathbf{v}_{t,2}^{C}; ...; \mathbf{v}_{t,n_h}^{C}] = \mathbf{v}_t^{C} = W^{UV} \mathbf{c}_t^{KV}$
where:
- $d$ is the embedding dimension.
- $n_h$ is the number of attention heads.
- $d_h$ is the dimension per head.
- $\mathbf{h}_t \in \mathbb{R}^{d}$ is the attention input for the $t$-th token.
- $\mathbf{c}_t^{KV} \in \mathbb{R}^{d_c}$ is the compressed latent vector for keys and values.
- $d_c$ is the KV compression dimension.
- $W^{DKV} \in \mathbb{R}^{d_c \times d}$ is the down-projection matrix.
- $W^{UK}, W^{UV} \in \mathbb{R}^{d_h n_h \times d_c}$ are the up-projection matrices for keys and values, respectively.
- $W^{KR} \in \mathbb{R}^{d_h^R \times d}$ is the matrix that produces the decoupled key carrying Rotary Positional Embedding (RoPE).
- $\operatorname{RoPE}(\cdot)$ denotes the operation that applies RoPE matrices.
- $[\cdot;\cdot]$ denotes concatenation.
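A minimal PyTorch sketch of this key/value compression path is given below, under stated assumptions: the dimensions follow the values reported later in this summary, the decoupled-key dimension $d_h^R = 64$ is an assumption (not stated here), and `rope()` is a placeholder for the RoPE application. Only the latent $\mathbf{c}_t^{KV}$ and the shared RoPE key $\mathbf{k}_t^{R}$ would need to be cached during inference.

```python
# Minimal sketch of the MLA key/value compression path (PyTorch).
# d, n_h, d_h, d_c follow the values stated in this summary; d_h_R is an
# assumed dimension for the decoupled RoPE key, and rope() is a placeholder.
import torch
import torch.nn as nn

d, n_h, d_h, d_c, d_h_R = 7168, 128, 128, 512, 64

W_DKV = nn.Linear(d, d_c, bias=False)          # down-projection: h_t -> c_t^{KV}
W_UK  = nn.Linear(d_c, n_h * d_h, bias=False)  # up-projection to per-head keys
W_UV  = nn.Linear(d_c, n_h * d_h, bias=False)  # up-projection to per-head values
W_KR  = nn.Linear(d, d_h_R, bias=False)        # decoupled key that carries RoPE

def rope(x):
    # Placeholder for the rotary positional embedding; identity for this sketch.
    return x

def mla_kv(h):
    """h: [T, d] attention inputs; returns per-head keys/values and the cached latent."""
    c_kv = W_DKV(h)                              # [T, d_c] -- cached during inference
    k_c = W_UK(c_kv).view(-1, n_h, d_h)          # [T, n_h, d_h]
    v_c = W_UV(c_kv).view(-1, n_h, d_h)          # [T, n_h, d_h]
    k_r = rope(W_KR(h))                          # [T, d_h_R] -- shared across heads, also cached
    k = torch.cat([k_c, k_r.unsqueeze(1).expand(-1, n_h, -1)], dim=-1)
    return k, v_c, c_kv
```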
- DeepSeekMoE: This architecture employs finer-grained experts and isolates some experts as shared ones; a routing sketch follows the definitions below. The FFN output $\mathbf{h}_t'$ is computed as:
$\mathbf{h}_t' = \mathbf{u}_t + \sum_{i=1}^{N_s} \operatorname{FFN}_i^{(s)}(\mathbf{u}_t) + \sum_{i=1}^{N_r} g_{i,t} \operatorname{FFN}_i^{(r)}(\mathbf{u}_t)$
$g_{i,t} = \frac{g'_{i,t}}{\sum_{j=1}^{N_r} g'_{j,t}}$
$g'_{i,t} = \begin{cases} s_{i,t}, & s_{i,t} \in \operatorname{Topk}(\{s_{j,t} \mid 1 \le j \le N_r\}, K_r), \\ 0, & \text{otherwise}, \end{cases}$
$s_{i,t} = \operatorname{Sigmoid}\left( \mathbf{u}_t^{T} \mathbf{e}_i \right)$
Where:
- $N_s$ and $N_r$ denote the numbers of shared experts and routed experts, respectively.
- $\operatorname{FFN}_i^{(s)}(\cdot)$ and $\operatorname{FFN}_i^{(r)}(\cdot)$ denote the $i$-th shared expert and the $i$-th routed expert, respectively.
- $K_r$ denotes the number of activated routed experts.
- $g_{i,t}$ is the gating value for the $i$-th expert.
- $s_{i,t}$ is the token-to-expert affinity.
- $\mathbf{e}_i$ is the centroid vector of the $i$-th routed expert.
- $\operatorname{Topk}(\cdot, K)$ denotes the set comprising the $K$ highest scores among the affinity scores calculated for the $t$-th token and all routed experts.
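A minimal sketch of this routing under stated assumptions (toy sizes, generic two-layer FFNs standing in for the fine-grained experts):

```python
# Illustrative DeepSeekMoE-style routing with toy sizes; the expert FFNs are
# generic two-layer networks standing in for the paper's fine-grained experts.
import torch
import torch.nn as nn

d, N_s, N_r, K_r = 16, 1, 8, 2  # toy hidden size, shared/routed experts, active routed experts

def make_ffn():
    return nn.Sequential(nn.Linear(d, 4 * d), nn.SiLU(), nn.Linear(4 * d, d))

shared = nn.ModuleList([make_ffn() for _ in range(N_s)])
routed = nn.ModuleList([make_ffn() for _ in range(N_r)])
e = nn.Parameter(torch.randn(N_r, d))            # expert centroids e_i

def moe_ffn(u):
    """u: [T, d] FFN inputs for one sequence."""
    s = torch.sigmoid(u @ e.T)                   # token-to-expert affinities s_{i,t}
    top = s.topk(K_r, dim=-1)                    # keep the K_r highest affinities
    g_prime = torch.zeros_like(s).scatter(-1, top.indices, top.values)
    g = g_prime / g_prime.sum(-1, keepdim=True)  # normalize gates over the selected experts
    out = u.clone()                              # residual term u_t
    for ffn in shared:
        out = out + ffn(u)                       # shared experts are always active
    for i, ffn in enumerate(routed):
        out = out + g[:, i:i + 1] * ffn(u)       # routed experts weighted by g_{i,t}
    return out
```

The dense loop over routed experts is for readability only; an efficient implementation would dispatch each token to its $K_r$ selected experts.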
DeepSeek-V3 also incorporates an auxiliary-loss-free load balancing strategy with a bias term $b_i$ for each expert:
$g'_{i,t} = \begin{cases} s_{i,t}, & s_{i,t} + b_i \in \operatorname{Topk}(\{s_{j,t} + b_j \mid 1 \le j \le N_r\}, K_r), \\ 0, & \text{otherwise}. \end{cases}$
A complementary sequence-wise balance loss is also used:
$\mathcal{L}_{\mathrm{Bal}} = \alpha \sum_{i=1}^{N_r} f_i P_i$
$f_i = \frac{N_r}{K_r T} \sum_{t=1}^{T} \mathds{1}\left( s_{i,t} \in \operatorname{Topk}(\{s_{j,t} \mid 1 \le j \le N_r\}, K_r) \right)$
$s'_{i,t} = \frac{s_{i,t}}{\sum_{j=1}^{N_r} s_{j,t}}$
$P_i = \frac{1}{T} \sum_{t=1}^{T} s'_{i,t}$
where:
- $\alpha$ is a hyper-parameter.
- $\mathds{1}(\cdot)$ denotes the indicator function.
- $T$ denotes the number of tokens in a sequence.
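The sketch below illustrates bias-adjusted selection and the sequence-wise balance loss; the bias affects only which experts are selected, not the gate values. The $\gamma$-step bias update shown is a simplified assumption about how the bias could be nudged from observed expert load, not the report's exact schedule.

```python
# Bias-adjusted selection and the sequence-wise balance loss. The gamma-step
# bias update is a simplified assumption, not the report's exact schedule.
import torch

def route_with_bias(s, b, K_r):
    """s: [T, N_r] affinities, b: [N_r] per-expert bias used only for top-k selection."""
    idx = (s + b).topk(K_r, dim=-1).indices
    g_prime = torch.zeros_like(s).scatter(-1, idx, s.gather(-1, idx))  # gates still use s
    return g_prime / g_prime.sum(-1, keepdim=True), idx

def sequence_balance_loss(s, idx, N_r, K_r, alpha):
    """Sequence-wise auxiliary loss L_Bal = alpha * sum_i f_i * P_i."""
    T = s.shape[0]
    mask = torch.zeros_like(s).scatter(-1, idx, 1.0)   # indicator of routed tokens
    f = mask.sum(0) * N_r / (K_r * T)                  # f_i: load fraction of expert i
    P = (s / s.sum(-1, keepdim=True)).mean(0)          # P_i: mean normalized affinity s'_{i,t}
    return alpha * (f * P).sum()

def update_bias(b, idx, N_r, gamma=1e-3):
    # Simplified illustration: push the bias down for overloaded experts, up otherwise.
    load = torch.bincount(idx.flatten(), minlength=N_r).float()
    return b - gamma * torch.sign(load - load.mean())
```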
Node-limited routing ensures each token is sent to at most $M$ nodes, limiting cross-node communication overhead.
- Multi-Token Prediction (MTP): This extends the prediction scope to multiple future tokens; a code sketch follows the MTP loss below. The $k$-th MTP module consists of a shared embedding layer $\operatorname{Emb}(\cdot)$, a shared output head $\operatorname{OutHead}(\cdot)$, a Transformer block $\operatorname{TRM}_k(\cdot)$, and a projection matrix $M_k \in \mathbb{R}^{d \times 2d}$.
$\mathbf{h}_i'^{k} = M_k [\operatorname{RMSNorm}(\mathbf{h}_i^{k-1}); \operatorname{RMSNorm}(\operatorname{Emb}(t_{i+k}))]$
$\mathbf{h}_{1:T-k}^{k} = \operatorname{TRM}_k(\mathbf{h}_{1:T-k}'^{k})$
$P_{i+k+1}^{k} = \operatorname{OutHead}(\mathbf{h}_i^{k})$
A cross-entropy loss $\mathcal{L}_{\text{MTP}}^{k}$ is computed for each prediction depth:
$\mathcal{L}_{\text{MTP}}^{k} = \operatorname{CrossEntropy}(P_{2+k:T+1}^{k}, t_{2+k:T+1}) = -\frac{1}{T} \sum_{i=2+k}^{T+1} \log P_i^k[t_i]$
The overall MTP loss is:
$\mathcal{L}_{\text{MTP}} = \frac{\lambda}{D} \sum_{k=1}^{D} \mathcal{L}_{\text{MTP}}^{k}$
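A sketch of one MTP module at depth $k$ under stated assumptions: toy sizes, a single Transformer encoder layer standing in for $\operatorname{TRM}_k$, and a hand-rolled RMSNorm to keep it self-contained.

```python
# Sketch of one MTP module at depth k. Emb and OutHead are shared with the main
# model; a single Transformer encoder layer stands in for TRM_k. Toy sizes.
import torch
import torch.nn as nn
import torch.nn.functional as F

d, vocab = 32, 1000

emb = nn.Embedding(vocab, d)                              # shared Emb(.)
out_head = nn.Linear(d, vocab, bias=False)                # shared OutHead(.)
M_k = nn.Linear(2 * d, d, bias=False)                     # projection matrix M_k
trm_k = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)  # stand-in for TRM_k

def rmsnorm(x, eps=1e-6):
    return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + eps)

def mtp_loss_at_depth(h_prev, tokens, k):
    """h_prev: [T, d] representations from depth k-1; tokens: [T+1] ids including targets."""
    T = h_prev.shape[0]
    h_in = M_k(torch.cat([rmsnorm(h_prev[: T - k]),       # h_i^{k-1}
                          rmsnorm(emb(tokens[k:T]))],     # Emb(t_{i+k})
                         dim=-1))
    h_k = trm_k(h_in.unsqueeze(0)).squeeze(0)             # TRM_k over positions 1..T-k
    logits = out_head(h_k)                                # predicts t_{i+k+1}
    targets = tokens[k + 1 : T + 1]
    # Mean over valid positions; the report normalizes by T.
    return F.cross_entropy(logits, targets)
```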
Infrastructures
- Compute Clusters: The model was trained on a cluster of 2048 NVIDIA H800 GPUs.
- Training Framework: The HAI-LLM framework supports 16-way Pipeline Parallelism (PP), 64-way Expert Parallelism (EP), and ZeRO-1 Data Parallelism (DP). The DualPipe algorithm is used for efficient pipeline parallelism and overlaps computation and communication. Efficient cross-node all-to-all communication kernels are developed to utilize InfiniBand (IB) and NVLink bandwidths.
- FP8 Training: A mixed-precision framework utilizing the FP8 data format was used. Fine-grained quantization uses tile-wise grouping of $1 \times N_c$ elements or block-wise grouping of $N_c \times N_c$ elements, each group with its own scaling factor; a quantization sketch follows this list.
- Inference and Deployment: The deployment strategy separates prefilling and decoding stages. Redundant experts are used to achieve load balancing among different experts in the MoE part.
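As referenced above, a minimal sketch of tile-wise fine-grained quantization: each $1 \times N_c$ tile gets its own scaling factor before casting to FP8. Here $N_c = 128$ and the e4m3 format are assumptions for illustration.

```python
# Tile-wise FP8 quantization sketch: one scaling factor per 1 x N_c tile along
# the last dimension. N_c = 128 and the e4m3 format are assumptions here.
import torch

def quantize_tilewise(x, n_c=128, fp8=torch.float8_e4m3fn):
    """x: [T, d] activations with d divisible by n_c; returns FP8 values plus per-tile scales."""
    T, d = x.shape
    fp8_max = torch.finfo(fp8).max
    tiles = x.view(T, d // n_c, n_c)
    scale = tiles.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12) / fp8_max
    q = (tiles / scale).to(fp8)                        # each tile uses its own scale
    return q.view(T, d), scale.squeeze(-1)

x = torch.randn(4, 256)
q, scale = quantize_tilewise(x)
# Dequantize approximately by re-applying the per-tile scales.
x_hat = q.view(4, -1, 128).to(torch.float32) * scale.unsqueeze(-1)
```

Block-wise grouping would follow the same pattern, with one scale shared by each $N_c \times N_c$ block.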
Pre-Training
The pre-training corpus was enriched with mathematical and programming samples and expanded multilingual coverage, totaling 14.8T tokens. The tokenizer employs byte-level BPE with a vocabulary of 128K tokens. The number of Transformer layers is 61, and the hidden dimension is 7168. MLA uses 128 attention heads with a per-head dimension of 128. The KV compression dimension $d_c$ is 512, and the query compression dimension $d_c'$ is 1536.
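For reference, the hyper-parameters stated above can be collected into a small configuration sketch (only values given in the text are included):

```python
# Hyper-parameters stated in this summary, collected for reference.
from dataclasses import dataclass

@dataclass
class DeepSeekV3Config:
    num_layers: int = 61            # Transformer layers
    hidden_dim: int = 7168          # model hidden dimension
    num_heads: int = 128            # MLA attention heads
    head_dim: int = 128             # per-head dimension
    kv_compression_dim: int = 512   # d_c
    q_compression_dim: int = 1536   # d_c'
    vocab_size: int = 128_000       # "128K tokens"; exact count not given here
```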
Long Context Extension
YaRN was applied for context extension in two phases to expand the context window from 4K to 32K and then to 128K.
Post-Training
Instruction-tuning datasets were curated to include 1.5M instances across multiple domains. Reasoning data was generated using an internal DeepSeek-R1 model, and non-reasoning data was generated using DeepSeek-V2.5. For RL, a rule-based Reward Model (RM) and a model-based RM were employed, along with Group Relative Policy Optimization (GRPO).
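A sketch of the group-relative advantage computation that GRPO is built on, assuming the standard formulation in which each sampled response's reward is normalized against its group's mean and standard deviation; the clipped policy objective and KL regularization of the full algorithm are omitted.

```python
# Group-relative advantages as used in GRPO: rewards for a group of sampled
# responses to the same prompt are normalized by the group's mean and std.
import torch

def grpo_advantages(rewards, eps=1e-8):
    """rewards: [num_prompts, group_size] scores from the rule-based or model-based RM."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

rewards = torch.tensor([[0.0, 1.0, 1.0, 0.0],
                        [0.2, 0.9, 0.4, 0.5]])
print(grpo_advantages(rewards))
```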
Evaluations
The base model was evaluated on benchmarks including MMLU, HellaSwag, PIQA, TriviaQA, GSM8K, HumanEval, and C-Eval. The chat model was evaluated on IFEval, FRAMES, LongBench v2, GPQA, SimpleQA, SWE-Bench Verified, and LiveCodeBench. Ablation studies were conducted for the MTP strategy and the auxiliary-loss-free balancing strategy: the MTP strategy enhances performance on most evaluation benchmarks, and batch-wise balancing imposes a more flexible constraint, allowing experts to better specialize in different domains.
The team also offered suggestions on chip design to AI hardware vendors for communication and compute hardware.
Conclusion and Future Directions
DeepSeek-V3 achieves state-of-the-art performance among open-source models and is competitive with closed-source models. The team plans to invest in research to improve training and inference efficiency, explore new architectures, enhance data quality, and expand reasoning capabilities.
Related Papers
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models (2024)
- DeepSeek-VL: Towards Real-World Vision-Language Understanding (2024)
- DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model (2024)
- DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence (2024)
- DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding (2024)
GitHub
- GitHub - deepseek-ai/DeepSeek-V3 (9,137 stars)
HackerNews
- DeepSeek-V3 Technical Report (132 points, 34 comments)
- DeepSeek-V3 (124 points, 39 comments)
- Deepseek V3 is officially released (code, paper, benchmark results) (600 points, 123 comments)