
Multi-task RNN-T with Semantic Decoder for Streamable Spoken Language Understanding (2204.00558v1)

Published 1 Apr 2022 in cs.CL, cs.SD, and eess.AS

Abstract: End-to-end Spoken Language Understanding (E2E SLU) has attracted increasing interest due to its advantages of joint optimization and low latency compared to traditionally cascaded pipelines. Existing E2E SLU models usually follow a two-stage configuration in which an Automatic Speech Recognition (ASR) network first predicts a transcript, which is then passed through an interface to a Natural Language Understanding (NLU) module to infer semantic labels such as intent and slot tags. This design, however, neither considers the NLU posterior while making transcript predictions nor corrects NLU prediction errors immediately by considering the previously predicted word-pieces. In addition, the NLU model in the two-stage system is not streamable, as it must wait for the audio segments to complete processing, which ultimately impacts the latency of the SLU system. In this work, we propose a streamable multi-task semantic transducer model to address these considerations. Our proposed architecture predicts ASR and NLU labels auto-regressively and uses a semantic decoder to ingest both previously predicted word-pieces and slot tags, aggregating them through a fusion network. Using an industry-scale SLU dataset and the public FSC dataset, we show the proposed model outperforms the two-stage E2E SLU model on both ASR and NLU metrics.
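The abstract's key architectural idea is a semantic decoder that conditions on both the previously predicted word-piece and the previously predicted slot tag, combined by a fusion network. The paper does not spell out the fusion equations here, so the following is a minimal sketch under assumed choices: random embedding tables, concatenation followed by a single linear projection with a `tanh` nonlinearity as the fusion network, and hypothetical vocabulary/slot/dimension sizes. It is illustrative only, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, SLOTS, DIM = 100, 10, 16  # hypothetical sizes, not from the paper

# Embedding tables for word-pieces and slot tags (randomly initialized here;
# in the real model these would be learned jointly with the transducer).
wp_emb = rng.normal(size=(VOCAB, DIM))
slot_emb = rng.normal(size=(SLOTS, DIM))

# Assumed fusion network: one linear layer over the concatenated embeddings.
# The paper's actual fusion network may use a different form.
W_fuse = rng.normal(size=(2 * DIM, DIM))

def semantic_decoder_step(prev_wp: int, prev_slot: int) -> np.ndarray:
    """Fuse the previously predicted word-piece and slot tag into a single
    decoder state, mirroring how the semantic decoder ingests both label
    streams auto-regressively."""
    fused = np.concatenate([wp_emb[prev_wp], slot_emb[prev_slot]])
    return np.tanh(fused @ W_fuse)

# One auto-regressive step given arbitrary previous predictions.
state = semantic_decoder_step(prev_wp=42, prev_slot=3)
print(state.shape)  # (16,)
```

Because each step depends only on already-emitted word-pieces and slot tags (not on the complete audio segment), this style of decoder can run frame-synchronously, which is what makes the model streamable.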

Authors (7)
  1. Xuandi Fu
  2. Feng-Ju Chang
  3. Martin Radfar
  4. Kai Wei
  5. Jing Liu
  6. Grant P. Strimel
  7. Kanthashree Mysore Sathyendra
Citations (4)
