
Integrating Pretrained ASR and LM to Perform Sequence Generation for Spoken Language Understanding (2307.11005v1)

Published 20 Jul 2023 in cs.CL, cs.SD, and eess.AS

Abstract: There has been increased interest in integrating pretrained speech recognition (ASR) and language models (LM) into the spoken language understanding (SLU) framework. However, prior methods often struggle with a vocabulary mismatch between the pretrained models, and the LM cannot be directly utilized because SLU sequence generation diverges from its NLU formulation. In this study, we propose a three-pass end-to-end (E2E) SLU system that effectively integrates ASR and LM subnetworks into the SLU formulation for sequence generation tasks. In the first pass, our architecture predicts ASR transcripts using the ASR subnetwork. This is followed by the LM subnetwork, which makes an initial SLU prediction. Finally, in the third pass, the deliberation subnetwork conditions on representations from both the ASR and LM subnetworks to make the final prediction. Our proposed three-pass SLU system shows improved performance over cascaded and E2E SLU models on two benchmark SLU datasets, SLURP and SLUE, especially on acoustically challenging utterances.
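The three-pass flow described in the abstract can be sketched as below. This is a minimal illustration, not the paper's implementation: all function names are hypothetical, and the three subnetworks are stubbed with placeholder computations so only the data flow (transcript and hidden representations passed between passes, with the deliberation pass conditioning on both) matches the description.

```python
# Hedged sketch of the three-pass SLU pipeline. Each subnetwork is a stub;
# in the actual system these would be pretrained neural models.

def asr_subnetwork(audio_features):
    """Pass 1: predict an ASR transcript plus hidden states (stubbed)."""
    transcript = " ".join(f"tok{i}" for i in range(len(audio_features)))
    hidden = [x * 0.5 for x in audio_features]  # placeholder encoder states
    return transcript, hidden

def lm_subnetwork(transcript):
    """Pass 2: make an initial SLU prediction from the transcript (stubbed)."""
    tokens = transcript.split()
    prediction = f"intent({tokens[0]})"
    hidden = [float(len(tok)) for tok in tokens]  # placeholder LM states
    return prediction, hidden

def deliberation_subnetwork(asr_hidden, lm_hidden):
    """Pass 3: condition on both representations for the final prediction (stubbed)."""
    fused = sum(asr_hidden) + sum(lm_hidden)  # placeholder for attention/fusion
    return f"final_intent(score={fused:.1f})"

def three_pass_slu(audio_features):
    """Run all three passes; the deliberation pass sees both sets of states."""
    transcript, asr_hidden = asr_subnetwork(audio_features)
    _initial_prediction, lm_hidden = lm_subnetwork(transcript)
    return deliberation_subnetwork(asr_hidden, lm_hidden)
```

The key structural point the sketch preserves is that the final prediction is not made from the transcript or the initial SLU output alone: the deliberation pass consumes intermediate representations from both earlier subnetworks, which is what lets it recover from ASR errors on acoustically challenging utterances.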

Authors (6)
  1. Siddhant Arora (50 papers)
  2. Hayato Futami (24 papers)
  3. Yosuke Kashiwagi (29 papers)
  4. Emiru Tsunoo (34 papers)
  5. Brian Yan (40 papers)
  6. Shinji Watanabe (416 papers)
Citations (4)