Modality Confidence Aware Training for Robust End-to-End Spoken Language Understanding (2307.12134v1)

Published 22 Jul 2023 in cs.CL, cs.SD, and eess.AS

Abstract: End-to-end (E2E) spoken language understanding (SLU) systems that generate a semantic parse directly from speech have recently become more promising. This approach uses a single model that exploits audio and text representations from pre-trained automatic speech recognition (ASR) models, and it outperforms traditional pipeline SLU systems in on-device streaming scenarios. However, E2E SLU systems still show weakness when text representation quality is low due to ASR transcription errors. To overcome this issue, we propose a novel E2E SLU system that enhances robustness to ASR errors by fusing audio and text representations based on the estimated modality confidence of ASR hypotheses. We introduce two novel techniques: 1) an effective method to encode the quality of ASR hypotheses and 2) an effective approach to integrate them into E2E SLU models. We show accuracy improvements on the STOP dataset and share an analysis demonstrating the effectiveness of our approach.
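The core fusion idea in the abstract can be sketched as a confidence-weighted interpolation of the two modality representations. This is an illustrative assumption, not the authors' exact architecture: the function name, the scalar-gate formulation, and the use of a precomputed confidence score are all simplifications for exposition.

```python
import numpy as np

def confidence_gated_fusion(audio_repr: np.ndarray,
                            text_repr: np.ndarray,
                            asr_confidence: float) -> np.ndarray:
    """Fuse audio and text representations using an estimated
    ASR-hypothesis confidence in [0, 1] (hypothetical simplification).

    Low confidence (noisy transcription) shifts the fused vector
    toward the audio branch; high confidence shifts it toward text.
    """
    g = float(np.clip(asr_confidence, 0.0, 1.0))
    return g * text_repr + (1.0 - g) * audio_repr

# Example: with confidence 0.75, the fused vector is 75% text, 25% audio.
audio = np.zeros(4)
text = np.ones(4)
fused = confidence_gated_fusion(audio, text, 0.75)
```

In the paper's actual model, the confidence signal is learned from encoded ASR-hypothesis quality rather than supplied as a fixed scalar; this sketch only shows how such a signal can modulate the modality mix.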

Authors (6)
  1. Suyoun Kim (22 papers)
  2. Akshat Shrivastava (25 papers)
  3. Duc Le (46 papers)
  4. Ju Lin (9 papers)
  5. Ozlem Kalinli (49 papers)
  6. Michael L. Seltzer (34 papers)
Citations (2)