Speech-Language Models with Decoupled Tokenizers and Multi-Token Prediction (2506.12537v1)
Abstract: Speech-LLMs (SLMs) offer a promising path toward unifying speech and text understanding and generation. However, challenges remain in achieving effective cross-modal alignment and high-quality speech generation. In this work, we systematically investigate the impact of key components (i.e., speech tokenizers, speech heads, and speaker modeling) on the performance of LLM-centric SLMs. We compare coupled, semi-decoupled, and fully decoupled speech tokenizers under a fair SLM framework and find that decoupled tokenization significantly improves alignment and synthesis quality. To address the information density mismatch between speech and text, we introduce multi-token prediction (MTP) into SLMs, enabling each hidden state to decode multiple speech tokens. This leads to up to 12$\times$ faster decoding and a substantial drop in word error rate (from 6.07 to 3.01). Furthermore, we propose a speaker-aware generation paradigm and introduce RoleTriviaQA, a large-scale role-playing knowledge QA benchmark with diverse speaker identities. Experiments demonstrate that our methods enhance both knowledge understanding and speaker consistency.
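The abstract's multi-token prediction idea (one LLM hidden state decoding several speech tokens) can be illustrated with a minimal sketch. The snippet below assumes linear per-position heads, a hidden size of 1024, a speech-codec vocabulary of 4096, and k = 4 parallel predictions; all of these are illustrative choices, and the paper's actual speech-head architecture and hyperparameters may differ.

```python
# Minimal sketch of a multi-token prediction (MTP) speech head.
# Assumptions (not taken from the paper): linear heads, hidden_size=1024,
# vocab_size=4096, k=4 tokens predicted per hidden state.
import torch
import torch.nn as nn


class MultiTokenSpeechHead(nn.Module):
    """Predicts k speech tokens from a single LLM hidden state."""

    def __init__(self, hidden_size: int = 1024, vocab_size: int = 4096, k: int = 4):
        super().__init__()
        self.k = k
        # One linear projection per future position; each maps the shared
        # hidden state to logits over the speech-codec vocabulary.
        self.heads = nn.ModuleList(
            nn.Linear(hidden_size, vocab_size) for _ in range(k)
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_size)
        # returns: (batch, seq_len, k, vocab_size), i.e. k speech-token
        # distributions per hidden state instead of one.
        return torch.stack([head(hidden) for head in self.heads], dim=2)


if __name__ == "__main__":
    head = MultiTokenSpeechHead()
    h = torch.randn(2, 10, 1024)       # dummy LLM hidden states
    logits = head(h)                   # (2, 10, 4, 4096)
    tokens = logits.argmax(dim=-1)     # greedy decode: 4 speech tokens per state
    print(tokens.shape)                # torch.Size([2, 10, 4])
```

Because each forward pass emits k speech tokens instead of one, fewer LLM steps are needed per second of audio, which is the source of the decoding speed-up the abstract reports.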
- Xiaoran Fan
- Zhichao Sun
- Yangfan Gao
- Jingfei Xiong
- Hang Yan
- Yifei Cao
- Jiajun Sun
- Shuo Li
- Zhihao Zhang
- Zhiheng Xi
- Yuhao Zhou
- Senjie Jin
- Changhao Jiang
- Junjie Ye
- Ming Zhang
- Rui Zheng
- Zhenhua Han
- Yunke Zhang
- Demei Yan
- Shaokang Dong