Dynamic Language Group-Based MoE: Enhancing Code-Switching Speech Recognition with Hierarchical Routing (2407.18581v3)

Published 26 Jul 2024 in cs.CL and cs.AI

Abstract: The Mixture of Experts (MoE) approach is well-suited for multilingual and code-switching (CS) tasks due to its multi-expert architecture. This work introduces DLG-MoE, a Dynamic Language Group-based MoE optimized for bilingual and CS scenarios. DLG-MoE operates on a hierarchical routing mechanism. First, a language router explicitly models the language and dispatches representations to the corresponding language expert group. Subsequently, an unsupervised router within each language group implicitly models attributes beyond language and coordinates expert routing and collaboration. The model achieves state-of-the-art (SOTA) performance while offering exceptional flexibility: it supports inference with different top-k values and streaming recognition, and its parameters can be pruned to obtain a monolingual sub-model. The code will be released.
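
The hierarchical routing described in the abstract can be illustrated with a short PyTorch sketch. This is a minimal, hypothetical example rather than the authors' released implementation: the module name LanguageGroupMoE, the hard frame-level language assignment, and the hyperparameters num_groups, experts_per_group, and top_k are assumptions made for illustration.

```python
# Minimal sketch of DLG-MoE-style hierarchical routing (illustrative only,
# not the paper's released code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LanguageGroupMoE(nn.Module):
    """Hypothetical hierarchical MoE layer: a language router dispatches frames
    to language expert groups; an unsupervised top-k router acts inside each group."""

    def __init__(self, d_model=256, num_groups=2, experts_per_group=4, top_k=2):
        super().__init__()
        self.num_groups = num_groups
        self.experts_per_group = experts_per_group
        self.top_k = top_k
        # Language router: explicitly predicts the language group of each frame
        # (supervised with a language objective per the abstract's description).
        self.language_router = nn.Linear(d_model, num_groups)
        # Unsupervised router inside each group: scores that group's experts.
        self.group_routers = nn.ModuleList(
            [nn.Linear(d_model, experts_per_group) for _ in range(num_groups)]
        )
        # Feed-forward experts, organized by language group.
        self.experts = nn.ModuleList(
            [
                nn.ModuleList(
                    [
                        nn.Sequential(
                            nn.Linear(d_model, 4 * d_model),
                            nn.ReLU(),
                            nn.Linear(4 * d_model, d_model),
                        )
                        for _ in range(experts_per_group)
                    ]
                )
                for _ in range(num_groups)
            ]
        )

    def forward(self, x):
        # x: (batch, time, d_model) frame-level acoustic representations.
        lang_logits = self.language_router(x)        # (B, T, num_groups)
        group_id = lang_logits.argmax(dim=-1)        # hard language assignment

        out = torch.zeros_like(x)
        for g in range(self.num_groups):
            mask = group_id == g                     # frames routed to group g
            if not mask.any():
                continue
            tokens = x[mask]                         # (N, d_model)
            scores = F.softmax(self.group_routers[g](tokens), dim=-1)
            topk_w, topk_idx = scores.topk(self.top_k, dim=-1)
            topk_w = topk_w / topk_w.sum(dim=-1, keepdim=True)
            mixed = torch.zeros_like(tokens)
            # Weighted sum over the selected experts of this language group.
            for k in range(self.top_k):
                for e in range(self.experts_per_group):
                    sel = topk_idx[:, k] == e
                    if sel.any():
                        mixed[sel] += topk_w[sel, k, None] * self.experts[g][e](tokens[sel])
            out[mask] = mixed
        # lang_logits would also feed an auxiliary language loss during training.
        return out, lang_logits
```

In this sketch, because every frame is routed to exactly one language group, dropping the other group's experts yields a monolingual sub-model, and top_k can be changed at inference time, mirroring the pruning and flexible top-k inference the abstract describes; the real model's routing, losses, and streaming details may differ.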

Authors (7)
  1. Hukai Huang (8 papers)
  2. Shenghui Lu (4 papers)
  3. Yahui Shan (4 papers)
  4. He Qu (4 papers)
  5. Wenhao Guan (13 papers)
  6. Qingyang Hong (29 papers)
  7. Lin Li (329 papers)