Task-Specific Expert Pruning for Sparse Mixture-of-Experts (2206.00277v2)

Published 1 Jun 2022 in cs.LG and cs.AI

Abstract: The sparse Mixture-of-Experts (MoE) model is powerful for large-scale pre-training and has achieved promising results due to its model capacity. However, with trillions of parameters, MoE is hard to deploy in cloud or mobile environments. MoE inference requires expert parallelism, which is not hardware-friendly and incurs high communication cost. For resource-limited downstream tasks in particular, such a sparse structure sacrifices a great deal of computing efficiency for limited performance gains. In this work, we observe that most experts contribute very little to MoE fine-tuning and inference. We further propose a general method to progressively drop the non-professional experts for the target downstream task, which preserves the benefits of MoE while reducing the MoE model to a single-expert dense model. Our experiments reveal that the fine-tuned single-expert model preserves 99.3% of the benefits of MoE across six different types of tasks while enjoying 2x inference speed with no communication cost.
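
The abstract does not spell out the pruning criterion, but one plausible instantiation is to track how much top-1 routing traffic each expert receives on the downstream task and progressively discard the least-used experts until a single one remains. The sketch below illustrates that idea only; the `MoELayer` class, the top-1 router, the usage counter, and the `prune_to` schedule are assumptions made for illustration, not the paper's exact procedure.

```python
# Hypothetical sketch of progressive, task-specific expert pruning for a sparse MoE layer.
# The criterion (cumulative top-1 routing counts on downstream data) is an assumption;
# the paper's actual selection rule and schedule may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoELayer(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
             for _ in range(num_experts)]
        )
        # Boolean mask of experts still active; pruned experts are never routed to.
        self.register_buffer("active", torch.ones(num_experts, dtype=torch.bool))
        # Running count of how often each expert is selected on downstream data.
        self.register_buffer("usage", torch.zeros(num_experts))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model); pruned experts get -inf logits so their prob is 0.
        logits = self.router(x).masked_fill(~self.active, float("-inf"))
        probs = F.softmax(logits, dim=-1)
        top1 = probs.argmax(dim=-1)  # top-1 routing per token
        if self.training:
            self.usage += torch.bincount(top1, minlength=len(self.experts)).float()
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            sel = top1 == e
            if sel.any():
                out[sel] = expert(x[sel]) * probs[sel, e].unsqueeze(-1)
        return out

    def prune_to(self, k: int) -> None:
        """Keep only the k most-used active experts (one step of progressive dropping)."""
        usage = self.usage.masked_fill(~self.active, float("-inf"))
        keep = usage.topk(k).indices
        new_active = torch.zeros_like(self.active)
        new_active[keep] = True
        self.active = new_active
```

During fine-tuning on the target task one would call `layer.prune_to(k)` on a schedule, for example halving `k` every few epochs until `k == 1`; the surviving expert together with the shared layers then behaves as an ordinary dense model that needs no expert parallelism at inference time.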

Authors (8)
  1. Tianyu Chen (35 papers)
  2. Shaohan Huang (79 papers)
  3. Yuan Xie (188 papers)
  4. Binxing Jiao (18 papers)
  5. Daxin Jiang (138 papers)
  6. Haoyi Zhou (20 papers)
  7. Jianxin Li (128 papers)
  8. Furu Wei (291 papers)
Citations (24)