Pipeline MoE: A Flexible MoE Implementation with Pipeline Parallelism (2304.11414v1)

Published 22 Apr 2023 in cs.DC and cs.LG

Abstract: The Mixture of Experts (MoE) model has become an important choice for LLMs because of its scalability, offering sublinear computational complexity for training and inference. However, existing MoE models suffer from two critical drawbacks: 1) tremendous intra-node and inter-node communication overhead introduced by all-to-all dispatching and gathering, and 2) limited scalability of the backbone because data parallelism and expert parallelism are bound together when scaling in the expert dimension. In this paper, we systematically analyze these drawbacks in terms of training efficiency from the parallel-framework view and propose a novel MoE architecture called Pipeline MoE (PPMoE) to tackle them. PPMoE builds expert parallelism on top of tensor parallelism and replaces communication-intensive all-to-all dispatching and gathering with simple tensor index slicing and an intra-node all-reduce. Moreover, PPMoE can conveniently integrate pipeline parallelism to further scale the backbone thanks to its flexible parallel architecture. Extensive experiments show that PPMoE not only achieves a more than $1.75\times$ speedup over existing MoE architectures but also reaches $90\%$ of the throughput of its corresponding backbone model, which is $20\times$ smaller.
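
The dispatch idea described in the abstract, selecting each expert's tokens by index slicing and combining per-rank partial outputs with an intra-node all-reduce instead of an all-to-all, can be illustrated with a minimal single-process sketch. Everything here is an assumption for illustration, not the authors' implementation: top-1 routing, one expert per simulated rank, and the loop standing in for the tensor-parallel group are all hypothetical; in a real setup the final sum would be a `torch.distributed.all_reduce` within that group.

```python
# Toy sketch (assumed, not from the paper): each simulated tensor-parallel
# rank holds one expert, slices out the tokens routed to it, and the partial
# outputs are combined by summation, emulating an intra-node all-reduce.
import torch

num_tokens, hidden, num_experts = 16, 32, 4
x = torch.randn(num_tokens, hidden)                        # token activations
expert_ids = torch.randint(0, num_experts, (num_tokens,))  # router's top-1 choice

# One tiny feed-forward "expert" per simulated rank (hypothetical modules).
experts = [torch.nn.Linear(hidden, hidden) for _ in range(num_experts)]

partial_outputs = []
for rank in range(num_experts):        # each loop iteration emulates one rank
    mask = expert_ids == rank          # tensor index slicing, no all-to-all
    local_out = torch.zeros_like(x)
    if mask.any():
        local_out[mask] = experts[rank](x[mask])  # process only local tokens
    partial_outputs.append(local_out)

# In a real tensor-parallel group this sum would be an intra-node all-reduce;
# each rank contributes non-zero rows only for its own expert's tokens.
y = torch.stack(partial_outputs).sum(dim=0)
print(y.shape)  # torch.Size([16, 32])
```

Because each rank writes non-zero rows only at the positions of its own tokens, the element-wise sum across ranks reconstructs the full MoE output without any gather step.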

Authors (6)
  1. Xin Chen (457 papers)
  2. Hengheng Zhang (6 papers)
  3. Xiaotao Gu (32 papers)
  4. Kaifeng Bi (6 papers)
  5. Lingxi Xie (137 papers)
  6. Qi Tian (314 papers)
Citations (3)