
Adapting Long Context NLM for ASR Rescoring in Conversational Agents (2104.11070v2)

Published 21 Apr 2021 in cs.CL, cs.LG, and cs.SD

Abstract: Neural language models (NLMs), when trained and evaluated with context spanning multiple utterances, have been shown to consistently outperform both conventional n-gram language models and NLMs that use limited context. In this paper, we investigate various techniques to incorporate turn-based context history into both recurrent (LSTM) and Transformer-XL based NLMs. For recurrent NLMs, we explore a context carry-over mechanism and feature-based augmentation, where we incorporate other forms of contextual information such as the bot response and the system dialogue acts classified by a Natural Language Understanding (NLU) model. To mitigate the "sharp nearby, fuzzy far away" problem with contextual NLMs, we propose the use of an attention layer over lexical metadata to improve feature-based augmentation. Additionally, we adapt our contextual NLM towards user-provided, on-the-fly speech patterns by leveraging encodings from a large pre-trained masked language model and fusing them with a Transformer-XL based NLM. We test our proposed models using N-best rescoring of ASR hypotheses of task-oriented dialogues and also evaluate on downstream NLU tasks such as intent classification and slot labeling. The best performing model shows a relative WER improvement of between 1.6% and 9.1% and a slot labeling F1 score improvement of 4% over non-contextual baselines.
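
For orientation, below is a minimal Python sketch of the second-pass setup the abstract describes: N-best rescoring, where each first-pass ASR hypothesis is re-ranked by interpolating its first-pass score with a contextual LM score computed over the preceding dialogue turns. The names (`Hypothesis`, `ContextualLM`, `lm_weight`) and the scoring interface are illustrative assumptions, not the paper's implementation; the paper's actual rescorers are LSTM and Transformer-XL NLMs with turn-based context.

```python
# Minimal sketch of contextual N-best rescoring (illustrative, not the paper's code).
from dataclasses import dataclass
from typing import List


@dataclass
class Hypothesis:
    text: str          # candidate transcription from the ASR first pass
    asr_score: float   # first-pass log score (acoustic + first-pass LM)


class ContextualLM:
    """Placeholder for a contextual NLM (e.g. LSTM or Transformer-XL) that
    scores a hypothesis given preceding dialogue turns such as earlier user
    utterances and bot responses."""

    def log_prob(self, text: str, context: List[str]) -> float:
        raise NotImplementedError  # e.g. sum of token log-probabilities


def rescore_nbest(hyps: List[Hypothesis],
                  context: List[str],
                  lm: ContextualLM,
                  lm_weight: float = 0.5) -> Hypothesis:
    """Return the hypothesis maximizing the interpolated score:
    (1 - lm_weight) * first-pass score + lm_weight * contextual LM score."""
    def combined(h: Hypothesis) -> float:
        return (1.0 - lm_weight) * h.asr_score + lm_weight * lm.log_prob(h.text, context)

    return max(hyps, key=combined)
```

The interpolation weight is typically tuned on a development set; the key point is that the contextual LM sees the dialogue history, which the first-pass model does not.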

Authors (5)
  1. Ashish Shenoy (13 papers)
  2. Sravan Bodapati (31 papers)
  3. Monica Sunkara (20 papers)
  4. Srikanth Ronanki (23 papers)
  5. Katrin Kirchhoff (36 papers)
Citations (21)