
Speech-language Pre-training for End-to-end Spoken Language Understanding (2102.06283v1)

Published 11 Feb 2021 in cs.CL, cs.SD, and eess.AS

Abstract: End-to-end (E2E) spoken language understanding (SLU) can infer semantics directly from the speech signal without cascading an automatic speech recognizer (ASR) with a natural language understanding (NLU) module. However, paired utterance recordings and corresponding semantics may not always be available or sufficient to train an E2E SLU model in a real production environment. In this paper, we propose to unify a well-optimized E2E ASR encoder (speech) and a pre-trained language model encoder (language) into a transformer decoder. The unified speech-language pre-trained model (SLP) is continually enhanced on limited labeled data from a target domain by using a conditional masked language model (MLM) objective, and thus can effectively generate a sequence of intent, slot type, and slot value for a given input speech at inference time. The experimental results on two public corpora show that our approach to E2E SLU is superior to the conventional cascaded method. It also outperforms the present state-of-the-art approaches to E2E SLU with much less paired data.

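The abstract describes a speech encoder and a pre-trained language model encoder feeding a shared transformer decoder, trained with a conditional masked language model (MLM) objective to emit intent, slot-type, and slot-value tokens. The snippet below is a minimal PyTorch sketch of that idea only; the `SLPSketch` class, module sizes, vocabulary, and `MASK_ID` token are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class SLPSketch(nn.Module):
    """Toy speech-language pre-training sketch (not the paper's model)."""

    def __init__(self, vocab_size=1000, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        # Stand-in for a well-optimized E2E ASR encoder over acoustic frames.
        self.speech_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=n_layers,
        )
        # Stand-in for a pre-trained language model encoder's token embeddings.
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Decoder attends over speech features while reconstructing masked
        # semantic tokens (intent, slot type, slot value).
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True),
            num_layers=n_layers,
        )
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, speech_feats, masked_tokens):
        # speech_feats: (batch, frames, d_model) acoustic features
        # masked_tokens: (batch, seq) semantic token ids, some replaced by [MASK]
        memory = self.speech_encoder(speech_feats)
        tgt = self.token_emb(masked_tokens)
        hidden = self.decoder(tgt, memory)
        return self.lm_head(hidden)  # logits over the token vocabulary


# Toy training step for the conditional MLM objective: the loss is computed
# only at masked positions, conditioned on the speech input.
model = SLPSketch()
speech = torch.randn(2, 50, 256)            # fake acoustic features
tokens = torch.randint(1, 1000, (2, 10))     # fake semantic token ids
mask = torch.rand(2, 10) < 0.15              # positions to mask
mask[:, 0] = True                            # ensure at least one masked slot
MASK_ID = 0                                  # hypothetical [MASK] token id
inputs = tokens.masked_fill(mask, MASK_ID)

logits = model(speech, inputs)
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
loss.backward()
print(float(loss))
```

At inference, the same decoder can be queried to fill in the semantic sequence conditioned on speech alone, which is how the sketch maps onto generating intent and slot values without an intermediate ASR transcript.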
Authors (7)
  1. Yao Qian (37 papers)
  2. Ximo Bian (1 paper)
  3. Yu Shi (153 papers)
  4. Naoyuki Kanda (61 papers)
  5. Leo Shen (2 papers)
  6. Zhen Xiao (24 papers)
  7. Michael Zeng (76 papers)
Citations (42)