Beyond Distillation: Task-level Mixture-of-Experts for Efficient Inference (2110.03742v1)

Published 24 Sep 2021 in cs.CL and cs.LG

Abstract: Sparse Mixture-of-Experts (MoE) has been a successful approach for scaling multilingual translation models to billions of parameters without a proportional increase in training computation. However, MoE models are prohibitively large and practitioners often resort to methods such as distillation for serving. In this work, we investigate routing strategies at different granularities (token, sentence, task) in MoE models to bypass distillation. Experiments on WMT and a web-scale dataset suggest that task-level routing (task-MoE) enables us to extract smaller, ready-to-deploy sub-networks from large sparse models. On WMT, our task-MoE with 32 experts (533M parameters) outperforms the best-performing token-level MoE model (token-MoE) by +1.0 BLEU on average across 30 language pairs. Peak inference throughput also improves by 1.9x when we route by tasks instead of tokens. While distilling a token-MoE to a smaller dense model preserves only 32% of the BLEU gains, our sub-network task-MoE, by design, preserves all the gains at the same inference cost as the distilled student model. Finally, when scaling up to 200 language pairs, our 128-expert task-MoE (13B parameters) performs competitively with a token-level counterpart while improving peak inference throughput by 2.6x.
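To make the routing distinction concrete, below is a minimal sketch of token-level versus task-level top-1 routing in a single MoE feed-forward layer. All names here (`expert_ffn`, `token_moe`, `task_moe`) and the fixed `task_to_expert` mapping are illustrative assumptions, not the paper's implementation; in the paper the task-level router is learned during training, conditioned on the task (e.g. the language pair) rather than on each token.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, num_experts = 16, 32, 4

# One feed-forward expert per slot: (input projection, output projection).
experts = [
    (rng.normal(size=(d_model, d_ff)), rng.normal(size=(d_ff, d_model)))
    for _ in range(num_experts)
]
router_w = rng.normal(size=(d_model, num_experts))  # token-level gating weights

def expert_ffn(x, w_in, w_out):
    """ReLU feed-forward block, the body of a single expert."""
    return np.maximum(x @ w_in, 0.0) @ w_out

def token_moe(tokens):
    """Token-level top-1 routing: every token picks its own expert,
    so at serving time any expert may be needed."""
    choices = (tokens @ router_w).argmax(axis=-1)  # expert id per token
    out = np.empty_like(tokens)
    for e in range(num_experts):
        mask = choices == e
        if mask.any():
            out[mask] = expert_ffn(tokens[mask], *experts[e])
    return out

def task_moe(tokens, task_id, task_to_expert):
    """Task-level routing: all tokens of a task share one routing decision."""
    e = task_to_expert[task_id]
    return expert_ffn(tokens, *experts[e])

# Hypothetical task-to-expert assignment; in the paper this routing is
# learned jointly with the model rather than fixed by hand.
task_to_expert = {"en-fr": 0, "en-de": 1}

x = rng.normal(size=(8, d_model))               # a batch of 8 token vectors
y_token = token_moe(x)                          # may touch every expert
y_task = task_moe(x, "en-fr", task_to_expert)   # touches expert 0 only

# Deployment: keep just the experts the task routes to -- a small dense
# sub-network extracted from the large sparse model.
sub_network = {e: experts[e] for e in {task_to_expert["en-fr"]}}
```

The last lines illustrate the serving argument in the abstract: because a task's tokens only ever reach the experts its router selects, that sub-network can be extracted and deployed on its own, at the same inference cost as a comparably sized dense model, without distillation.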

Authors (7)
  1. Sneha Kudugunta (14 papers)
  2. Yanping Huang (40 papers)
  3. Ankur Bapna (53 papers)
  4. Maxim Krikun (20 papers)
  5. Dmitry Lepikhin (10 papers)
  6. Minh-Thang Luong (32 papers)
  7. Orhan Firat (80 papers)
Citations (94)