SMILE: Scaling Mixture-of-Experts with Efficient Bi-level Routing (2212.05191v1)

Published 10 Dec 2022 in cs.LG

Abstract: Mixture-of-Experts (MoE) parallelism is a recent advancement that scales up model size at constant computational cost. MoE selects a different set of parameters (i.e., experts) for each incoming token, resulting in a sparsely-activated model. Despite several successful applications of MoE, its training efficiency degrades significantly as the number of experts increases. The routing stage of MoE relies on the efficiency of the All2All communication collective, which suffers from network congestion and scales poorly. To mitigate these issues, we introduce SMILE, which exploits heterogeneous network bandwidth and splits single-step routing into bi-level routing. Our experimental results show that the proposed method achieves a 2.5x speedup over Switch Transformer in pretraining throughput on the Colossal Clean Crawled Corpus without losing any convergence speed.
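
The bi-level routing idea described in the abstract can be illustrated with a small, single-process sketch: each token is first gated to a node (the coarse level, which in a real cluster would traverse the slower inter-node network), and only then to one expert hosted on that node (the fine level, confined to faster intra-node links). The sketch below is an assumption-laden toy, not the paper's implementation; the gating matrices and the names num_nodes and experts_per_node are illustrative only.

```python
# Toy sketch of bi-level (node -> local expert) routing for an MoE layer.
# All shapes and gating weights are hypothetical; a real system would replace
# the two argmax stages with distributed All2All / intra-node exchanges.
import numpy as np

rng = np.random.default_rng(0)

num_tokens, d_model = 8, 16
num_nodes, experts_per_node = 2, 4   # total experts = num_nodes * experts_per_node

tokens = rng.standard_normal((num_tokens, d_model))

# Level 1: gate each token to a node (the only stage that would need
# cross-node communication in a real deployment).
w_node = rng.standard_normal((d_model, num_nodes))
node_choice = np.argmax(tokens @ w_node, axis=1)   # top-1 node per token

# Level 2: within each node, gate its assigned tokens to a local expert
# (intra-node communication only).
w_expert = rng.standard_normal((num_nodes, d_model, experts_per_node))

for node in range(num_nodes):
    local = np.flatnonzero(node_choice == node)
    if local.size == 0:
        continue
    expert_choice = np.argmax(tokens[local] @ w_expert[node], axis=1)
    for tok, exp in zip(local, expert_choice):
        print(f"token {tok} -> node {node}, local expert {exp}")
```

Splitting the routing this way keeps the congestion-prone inter-node exchange coarse-grained (one decision per token per node), while the fine-grained expert dispatch stays on high-bandwidth local links.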

Authors (7)
  1. Chaoyang He (46 papers)
  2. Shuai Zheng (67 papers)
  3. Aston Zhang (48 papers)
  4. George Karypis (110 papers)
  5. Trishul Chilimbi (22 papers)
  6. Mahdi Soltanolkotabi (79 papers)
  7. Salman Avestimehr (116 papers)
Citations (1)
