
Contextual Language Model Adaptation for Conversational Agents (1806.10215v4)

Published 26 Jun 2018 in cs.CL

Abstract: Statistical language models (LMs) play a key role in the Automatic Speech Recognition (ASR) systems used by conversational agents. These ASR systems should provide high accuracy across a variety of speaking styles, domains, vocabularies, and argots. In this paper, we present a DNN-based method that adapts the LM to each user-agent interaction based on generalized contextual information, by predicting an optimal, context-dependent set of LM interpolation weights. We show that this framework for contextual adaptation provides accuracy improvements under different mixture-LM partitions relevant to both (1) goal-oriented conversational agents, where it is natural to partition the data by the requested application, and (2) non-goal-oriented conversational agents, where the data can be partitioned using topic labels predicted by a topic classifier. We obtain a relative WER improvement of 3% with a 1-pass decoding strategy and 6% in a 2-pass decoding framework, over an unadapted model. We also show up to a 15% relative improvement in recognizing named entities, which is of significant value for conversational ASR systems.
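In the interpolation framework the abstract describes, the adapted model presumably takes the form P(w | h, c) = Σ_k λ_k(c) P_k(w | h), where the λ_k(c) are context-dependent weights over K component LMs produced by a DNN. The sketch below is a minimal, hypothetical PyTorch illustration of that idea; the layer sizes, the `ContextualInterpolationWeights` name, and the feature choices are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn


class ContextualInterpolationWeights(nn.Module):
    """Predict mixture-LM interpolation weights from a context embedding.

    Hypothetical sketch: a small feed-forward network maps generalized
    contextual features to a point on the K-simplex, one weight per
    component LM. Architecture details here are illustrative assumptions.
    """

    def __init__(self, context_dim: int, num_lms: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(context_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_lms),
        )

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        # Softmax keeps the weights non-negative and summing to 1,
        # so the interpolated mixture stays a valid distribution.
        return torch.softmax(self.net(context), dim=-1)


def mixture_log_prob(component_log_probs: torch.Tensor,
                     weights: torch.Tensor) -> torch.Tensor:
    """Combine per-component log P_k(w | h) into log P(w | h, c).

    component_log_probs: (batch, K) log-probabilities of the same word
    under each of the K component LMs; weights: (batch, K) from the DNN.
    log sum_k lambda_k * p_k == logsumexp(log p_k + log lambda_k).
    """
    return torch.logsumexp(component_log_probs + weights.log(), dim=-1)


# Toy usage: 3 component LMs (e.g., per-application or per-topic
# partitions), 16-dimensional context features -- all stand-in values.
model = ContextualInterpolationWeights(context_dim=16, num_lms=3)
ctx = torch.randn(2, 16)             # batch of context vectors
lm_lp = torch.rand(2, 3).log()       # stand-in component log-probs
print(mixture_log_prob(lm_lp, model(ctx)).shape)  # torch.Size([2])
```

The softmax output layer is the natural design choice here: it constrains the predicted weights to the probability simplex, which is exactly what a valid mixture LM requires at either 1-pass or 2-pass decoding time.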

Authors (8)
  1. Anirudh Raju (20 papers)
  2. Behnam Hedayatnia (27 papers)
  3. Linda Liu (10 papers)
  4. Ankur Gandhe (30 papers)
  5. Chandra Khatri (20 papers)
  6. Angeliki Metallinou (14 papers)
  7. Anu Venkatesh (10 papers)
  8. Ariya Rastrow (55 papers)
Citations (24)