Do as I mean, not as I say: Sequence Loss Training for Spoken Language Understanding (2102.06750v1)

Published 12 Feb 2021 in cs.CL and eess.AS

Abstract: Spoken language understanding (SLU) systems extract transcriptions, as well as semantics of intent or named entities, from speech, and are essential components of voice-activated systems. SLU models, which either directly extract semantics from audio or are composed of pipelined automatic speech recognition (ASR) and natural language understanding (NLU) models, are typically trained via differentiable cross-entropy losses, even when the relevant performance metrics of interest are word or semantic error rates. In this work, we propose non-differentiable sequence losses based on SLU metrics as a proxy for semantic error and use the REINFORCE trick to train ASR and SLU models with this loss. We show that custom sequence loss training achieves state-of-the-art results on open SLU datasets and leads to a 6% relative improvement in both ASR and NLU performance metrics on large proprietary datasets. We also demonstrate how the semantic sequence loss training paradigm can be used to update ASR and SLU models without transcripts, using semantic feedback alone.
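The core idea is that a non-differentiable sequence metric (word or semantic error rate) can still drive gradient updates through the REINFORCE estimator: sample hypotheses from the model, score them with the metric, and weight the log-probability gradient by the (baseline-subtracted) score. The sketch below is a minimal, hypothetical illustration of that surrogate loss, not the authors' implementation; the function name, tensor shapes, and the greedy-hypothesis baseline are assumptions for the example.

```python
# Minimal sketch of REINFORCE-style sequence-loss training (illustrative only,
# not the paper's code). A non-differentiable error metric scores sampled
# hypotheses, and the advantage weights the model's log-probabilities.
import torch

def reinforce_sequence_loss(log_probs, sampled_errors, baseline_error):
    """log_probs:      (batch,) summed log p(hypothesis | audio) of sampled hypotheses
       sampled_errors: (batch,) non-differentiable error per sample (e.g. WER or semantic error)
       baseline_error: scalar baseline, e.g. the error of the greedy/decoded hypothesis"""
    # Advantage: how much worse (or better) each sample is than the baseline.
    advantage = sampled_errors - baseline_error
    # REINFORCE surrogate: grad of E[error] ~= E[(error - baseline) * grad log p].
    return (advantage.detach() * log_probs).mean()

# Toy usage with random values standing in for a real ASR/SLU model's outputs.
log_probs = torch.randn(4, requires_grad=True)
errors = torch.tensor([0.3, 0.1, 0.5, 0.2])   # error rates of sampled hypotheses
loss = reinforce_sequence_loss(log_probs, errors, baseline_error=0.25)
loss.backward()
print(loss.item(), log_probs.grad)
```

Because the metric only enters as a per-sample scalar weight, the same recipe applies whether the score comes from transcripts (WER) or from semantic feedback alone, which is how the paper updates models without transcripts.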

Authors (7)
  1. Milind Rao (13 papers)
  2. Pranav Dheram (7 papers)
  3. Gautam Tiwari (7 papers)
  4. Anirudh Raju (20 papers)
  5. Jasha Droppo (24 papers)
  6. Ariya Rastrow (55 papers)
  7. Andreas Stolcke (57 papers)
Citations (17)