Parameter-Efficient Mixture-of-Experts Architecture for Pre-trained Language Models (2203.01104v4)

Published 2 Mar 2022 in cs.CL, cs.AI, cs.LG, and quant-ph

Abstract: Recently, the Mixture-of-Experts (MoE) architecture has achieved remarkable success in increasing the capacity of large-scale language models. However, MoE requires incorporating significantly more parameters than the base model being extended. In this paper, we propose building a parameter-efficient MoE architecture by sharing information among experts. We adopt the matrix product operator (MPO, a tensor decomposition from quantum many-body physics) to reconstruct the parameter matrix in the expert layer, and we increase the capacity of pre-trained language models by sharing the parameters of the central tensor (containing the core information) among different experts while enabling specificity through the auxiliary tensors (complementing the central tensor) of each expert. To address the unbalanced optimization issue, we further design a gradient mask strategy for the MPO-based MoE architecture. Extensive experiments based on T5 and GPT-2 show improved performance and efficiency of the pre-trained language models (a 27.2x reduction in total parameters while achieving superior model performance, compared with Switch Transformers). Our code is publicly available at https://github.com/RUCAIBox/MPOE.
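
To make the sharing mechanism concrete, below is a minimal PyTorch sketch (not the authors' code; see the RUCAIBox/MPOE repository for that) of an expert layer whose weight is factorized into three MPO-style cores: a large central core shared across all experts and two small expert-specific auxiliary cores. The factor shapes, bond dimension, class name `MPOSharedExpertLayer`, and top-1 routing are illustrative assumptions; the paper's full MPO decomposition and gradient mask training strategy are not reproduced here.

```python
# Minimal sketch of MPO-style parameter sharing across MoE experts.
# Shapes, names, and routing are illustrative assumptions, not the
# implementation from https://github.com/RUCAIBox/MPOE.
import torch
import torch.nn as nn


class MPOSharedExpertLayer(nn.Module):
    """Toy MoE layer: each expert weight W_e (d_out x d_in) is rebuilt from
    a large shared central core and two small expert-specific auxiliary
    cores, mimicking the central/auxiliary tensor split described above."""

    def __init__(self, in_shape=(4, 4, 4), out_shape=(4, 4, 4),
                 n_experts=4, bond=16):
        super().__init__()
        (i1, i2, i3), (o1, o2, o3) = in_shape, out_shape
        self.in_shape, self.out_shape = in_shape, out_shape
        # Shared central core: holds most of the parameters, stored once.
        self.central = nn.Parameter(torch.randn(bond, o2, i2, bond) * 0.02)
        # Expert-specific auxiliary cores: small, encode expert specificity.
        self.aux_left = nn.ParameterList(
            [nn.Parameter(torch.randn(o1, i1, bond) * 0.02)
             for _ in range(n_experts)])
        self.aux_right = nn.ParameterList(
            [nn.Parameter(torch.randn(bond, o3, i3) * 0.02)
             for _ in range(n_experts)])
        self.router = nn.Linear(i1 * i2 * i3, n_experts)

    def expert_weight(self, e):
        # Contract aux_left[e] -- central -- aux_right[e] into the full
        # (d_out x d_in) weight matrix of expert e.
        full = torch.einsum('abx,xcdy,yef->acebdf',
                            self.aux_left[e], self.central, self.aux_right[e])
        d_out = self.out_shape[0] * self.out_shape[1] * self.out_shape[2]
        d_in = self.in_shape[0] * self.in_shape[1] * self.in_shape[2]
        return full.reshape(d_out, d_in)

    def forward(self, x):
        # x: (batch, d_in); simple top-1 routing for illustration only.
        expert_idx = self.router(x).argmax(dim=-1)
        d_out = self.out_shape[0] * self.out_shape[1] * self.out_shape[2]
        out = x.new_zeros(x.size(0), d_out)
        for e in range(len(self.aux_left)):
            mask = expert_idx == e
            if mask.any():
                out[mask] = x[mask] @ self.expert_weight(e).T
        return out


if __name__ == "__main__":
    layer = MPOSharedExpertLayer()
    y = layer(torch.randn(8, 64))
    print(y.shape)  # torch.Size([8, 64])
```

With the toy shapes above, the shared central core holds 4,096 parameters while each added expert contributes only 512 auxiliary parameters, which illustrates why sharing the central tensor keeps the total parameter count low as the number of experts grows.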

Authors (5)
  1. Ze-Feng Gao (24 papers)
  2. Peiyu Liu (27 papers)
  3. Wayne Xin Zhao (196 papers)
  4. Zhong-Yi Lu (153 papers)
  5. Ji-Rong Wen (299 papers)
Citations (21)
GitHub: https://github.com/RUCAIBox/MPOE