MoDE: A Mixture-of-Experts Model with Mutual Distillation among the Experts (2402.00893v1)

Published 31 Jan 2024 in cs.LG and cs.AI

Abstract: The application of mixture-of-experts (MoE) is gaining popularity due to its ability to improve a model's performance. In an MoE structure, the gate layer plays a significant role in distinguishing input features and routing them to different experts, enabling each expert to specialize in its corresponding sub-task. However, the gate's routing mechanism also gives rise to a narrow vision: each individual expert fails to use more samples when learning its allocated sub-task, which in turn limits how much further the MoE can improve its generalization ability. To address this effectively, we propose a method called Mixture-of-Distilled-Expert (MoDE), which applies moderate mutual distillation among experts so that each expert picks up features learned by the other experts and gains a more accurate perception of its originally allocated sub-task. We conduct extensive experiments on tabular, NLP and CV datasets, which demonstrate MoDE's effectiveness, universality and robustness. Furthermore, we develop a parallel study by innovatively constructing "expert probing" to experimentally show why MoDE works: distilling knowledge at a moderate level improves each individual expert's test performance on its assigned task, leading to an improvement in the MoE's overall performance.
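
The core mechanism described in the abstract, a gated MoE layer whose training objective adds a mutual-distillation term that pulls each expert's output toward its peers', can be sketched as follows. This is a minimal illustration, not the authors' reference implementation: it assumes a dense softmax gate, linear experts, an MSE-based distillation term, and a hypothetical weighting coefficient alpha.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoDELayer(nn.Module):
    """Minimal mixture-of-experts layer with mutual distillation among experts.

    Sketch under stated assumptions: a dense softmax gate and an MSE term
    pulling each expert's output toward the mean of the other experts'.
    """
    def __init__(self, d_in, d_out, n_experts=4, alpha=0.1):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(d_in, d_out) for _ in range(n_experts)])
        self.gate = nn.Linear(d_in, n_experts)
        self.alpha = alpha  # hypothetical weight on the mutual-distillation term

    def forward(self, x):
        gate_w = F.softmax(self.gate(x), dim=-1)               # (B, E) routing weights
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, d_out)
        y = (gate_w.unsqueeze(-1) * outs).sum(dim=1)            # gated combination

        # Mutual distillation: pull each expert toward its peers' mean output.
        peer_mean = (outs.sum(dim=1, keepdim=True) - outs) / (outs.size(1) - 1)
        distill_loss = F.mse_loss(outs, peer_mean.detach())
        return y, self.alpha * distill_loss

# Usage: add the (moderately weighted) distillation term to the task loss.
layer = MoDELayer(d_in=16, d_out=1)
x, target = torch.randn(8, 16), torch.randn(8, 1)
y, distill = layer(x)
loss = F.mse_loss(y, target) + distill
loss.backward()
```

The key design point is that alpha stays small ("moderate" distillation in the paper's terms), so each expert still specializes on the samples routed to it while absorbing some of what the other experts learn.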

Authors (7)
  1. Zhitian Xie (2 papers)
  2. Yinger Zhang (7 papers)
  3. Chenyi Zhuang (20 papers)
  4. Qitao Shi (3 papers)
  5. Zhining Liu (32 papers)
  6. Jinjie Gu (50 papers)
  7. Guannan Zhang (85 papers)
Citations (2)