DiscreteSLU: A Large Language Model with Self-Supervised Discrete Speech Units for Spoken Language Understanding (2406.09345v1)
Abstract: The integration of pre-trained text-based large language models (LLMs) with speech input has enabled instruction-following capabilities for diverse speech tasks. This integration requires a speech encoder, a speech adapter, and an LLM, trained on diverse tasks. We propose using discrete speech units (DSU), rather than continuous-valued speech encoder outputs, which are converted into the LLM token-embedding space by the speech adapter. We generate DSU with a self-supervised speech encoder followed by k-means clustering. The proposed model shows robust performance on speech inputs from both seen and unseen domains, as well as instruction-following capability in spoken question answering. We also explore various types of DSU extracted from different layers of the self-supervised speech encoder, as well as Mel-frequency cepstral coefficients (MFCCs). Our findings suggest that the ASR task and datasets are not crucial when instruction-tuning for spoken question answering tasks.
- Suwon Shon
- Kwangyoun Kim
- Yi-Te Hsu
- Prashant Sridhar
- Shinji Watanabe
- Karen Livescu
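
To make the DSU pipeline in the abstract concrete, here is a minimal sketch of its three stages: continuous self-supervised features are quantized by k-means into discrete speech units, which a speech adapter then maps into the LLM token-embedding space. Random features stand in for a real encoder output (e.g., a hidden-layer representation from a model like WavLM), and the feature dimension, codebook size, and adapter layout are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of the DSU pipeline: self-supervised features -> k-means
# cluster IDs (discrete speech units) -> adapter into LLM embedding space.
# All dimensions and the adapter design here are assumptions for illustration.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

FEAT_DIM = 1024      # width of an encoder hidden layer (assumption)
N_CLUSTERS = 1000    # k-means codebook size (assumption)
LLM_DIM = 4096       # target LLM token-embedding width (assumption)

# Stand-in for encoder outputs: (num_frames, FEAT_DIM) pooled over utterances.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(5000, FEAT_DIM)).astype(np.float32)

# 1) Fit k-means on training features to build the DSU codebook.
kmeans = KMeans(n_clusters=N_CLUSTERS, n_init=1, random_state=0).fit(train_feats)

# 2) Quantize a new utterance's frames into discrete speech units.
utt_feats = rng.normal(size=(300, FEAT_DIM)).astype(np.float32)
dsu = kmeans.predict(utt_feats)               # (300,) integer cluster IDs

# 3) Speech adapter: embed DSU IDs, then project into the LLM space.
#    (A learned lookup plus a linear layer; the paper's adapter may differ.)
adapter = nn.Sequential(
    nn.Embedding(N_CLUSTERS, 512),
    nn.Linear(512, LLM_DIM),
)
dsu_tensor = torch.from_numpy(dsu).long()
speech_embeds = adapter(dsu_tensor)           # (300, LLM_DIM)
print(speech_embeds.shape)                    # torch.Size([300, 4096])
```

In practice, DSU-based pipelines often also deduplicate consecutive identical units before feeding them to the LLM, which shortens the input sequence considerably; whether and how that is done here is a detail left to the full paper.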