DSMoE: Matrix-Partitioned Experts with Dynamic Routing for Computation-Efficient Dense LLMs (2502.12455v2)
Abstract: As LLMs continue to scale, computational costs and resource consumption have emerged as significant challenges. While existing sparsification methods like pruning reduce computational overhead, they risk losing model knowledge through parameter removal. This paper proposes DSMoE (Dynamic Sparse Mixture-of-Experts), a novel approach that achieves sparsification by partitioning pre-trained FFN layers into computational blocks. We implement adaptive expert routing using sigmoid activation and straight-through estimators, enabling tokens to flexibly access different aspects of model knowledge based on input complexity. Additionally, we introduce a sparsity loss term to balance performance and computational efficiency. Extensive experiments on LLaMA models demonstrate that under equivalent computational constraints, DSMoE achieves superior performance compared to existing pruning and MoE approaches across language modeling and downstream tasks, particularly excelling in generation tasks. Analysis reveals that DSMoE learns distinctive layerwise activation patterns, providing new insights for future MoE architecture design.
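The abstract compresses three mechanisms into one sentence: partitioning a pre-trained FFN into expert blocks, per-token sigmoid routing with a straight-through estimator (STE), and a sparsity loss. Below is a minimal PyTorch sketch of how these pieces could fit together, assuming a plain ReLU FFN rather than LLaMA's gated SwiGLU; the class name `DSMoELayer`, the 0.5 gating threshold, and the mean-activation sparsity penalty are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DSMoELayer(nn.Module):
    """Sketch of a DSMoE-style layer: a pre-trained FFN split column-wise
    into n_experts blocks, each gated per token by a sigmoid router with
    a straight-through estimator."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int = 8,
                 threshold: float = 0.5):  # threshold is an assumed value
        super().__init__()
        assert d_ff % n_experts == 0, "FFN width must split evenly"
        self.n_experts = n_experts
        self.threshold = threshold
        d_block = d_ff // n_experts
        # Each "expert" is one partition of the original FFN's matrices;
        # in practice these would be initialized from pre-trained weights.
        self.up = nn.ModuleList(nn.Linear(d_model, d_block) for _ in range(n_experts))
        self.down = nn.ModuleList(nn.Linear(d_block, d_model) for _ in range(n_experts))
        self.router = nn.Linear(d_model, n_experts)

    def forward(self, x: torch.Tensor):
        # x: (batch, seq, d_model)
        probs = torch.sigmoid(self.router(x))    # soft gate per expert, in [0, 1]
        hard = (probs > self.threshold).float()  # binary on/off routing decision
        # Straight-through estimator: the forward pass uses the hard mask,
        # while gradients flow through the soft sigmoid probabilities.
        gate = hard + probs - probs.detach()
        out = torch.zeros_like(x)
        for i in range(self.n_experts):
            h = F.relu(self.up[i](x))             # one FFN partition
            out = out + gate[..., i : i + 1] * self.down[i](h)
        # Sparsity penalty pushing the router toward fewer active blocks
        # per token (a simple L1-style stand-in for the paper's loss term).
        sparsity_loss = probs.mean()
        return out, sparsity_loss
```

In training, `sparsity_loss` would be added to the language-modeling loss with a weighting coefficient, so the router trades off how many expert blocks each token activates against task performance, which is the balance the abstract describes.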
- Minxuan Lv
- Zhenpeng Su
- Leiyu Pan
- Yizhe Xiong
- Zijia Lin
- Hui Chen
- Wei Zhou
- Jungong Han
- Guiguang Ding
- Cheng Luo
- Di Zhang
- Kun Gai
- Songlin Hu