
Applying LLMs for Rescoring N-best ASR Hypotheses of Casual Conversations: Effects of Domain Adaptation and Context Carry-over (2406.18972v1)

Published 27 Jun 2024 in eess.AS and cs.CL

Abstract: LLMs have been successfully applied to rescoring automatic speech recognition (ASR) hypotheses. However, their ability to rescore ASR hypotheses of casual conversations has not been sufficiently explored. In this study, we investigate this by performing N-best rescoring of ASR hypotheses using Llama2 on the CHiME-7 distant ASR (DASR) task. Llama2 is one of the most representative LLMs, and the CHiME-7 DASR task provides datasets of casual conversations between multiple participants. We investigate the effects of domain adaptation of the LLM and of context carry-over when performing N-best rescoring. Experimental results show that, even without domain adaptation, Llama2 outperforms a standard-size domain-adapted Transformer-LM, especially when a long context is used. Domain adaptation shortens the context length Llama2 needs to achieve its best performance, i.e., it reduces the computational cost of Llama2.
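For concreteness, the sketch below illustrates the kind of N-best rescoring the abstract describes: each hypothesis is scored by an LLM, optionally conditioned on preceding utterances (context carry-over), and combined with the first-pass ASR score. This is a minimal illustration, not the paper's exact setup; the checkpoint name (`meta-llama/Llama-2-7b-hf`), the log-linear interpolation, and the `lm_weight` parameter are assumptions for the example.

```python
# Minimal sketch of N-best ASR hypothesis rescoring with a causal LLM.
# Assumptions (not from the paper): HuggingFace transformers, the
# meta-llama/Llama-2-7b-hf checkpoint, and a log-linear interpolation
# weight `lm_weight` tuned on a development set.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model.eval()

def llm_log_likelihood(text: str, context: str = "") -> float:
    """Sum of token log-probabilities of `text`, optionally conditioned
    on preceding dialogue `context` (context carry-over)."""
    ids = tokenizer(context + text, return_tensors="pt").input_ids
    # Number of context tokens to exclude from the score (approximate
    # at the boundary; sufficient for a sketch).
    ctx_len = (
        tokenizer(context, return_tensors="pt").input_ids.size(1)
        if context else 0
    )
    with torch.no_grad():
        logits = model(ids).logits
    # Shift: the logit at position t predicts token t+1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    token_scores = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    # Score only the hypothesis tokens, not the carried-over context.
    return token_scores[max(ctx_len - 1, 0):].sum().item()

def rescore_nbest(nbest, context: str = "", lm_weight: float = 0.5) -> str:
    """nbest: list of (hypothesis_text, asr_score) pairs.
    Returns the hypothesis maximizing asr_score + lm_weight * LLM score."""
    return max(
        nbest,
        key=lambda h: h[1] + lm_weight * llm_log_likelihood(h[0], context),
    )[0]
```

In this sketch, context carry-over simply prepends previously decoded utterances to the prompt while excluding them from the score, so a longer context improves conditioning at the price of more computation per hypothesis, which is the trade-off the abstract's finding on domain adaptation and context length speaks to.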

Authors (8)
  1. Atsunori Ogawa (15 papers)
  2. Naoyuki Kamo (13 papers)
  3. Kohei Matsuura (26 papers)
  4. Takanori Ashihara (28 papers)
  5. Takafumi Moriya (30 papers)
  6. Takatomo Kano (9 papers)
  7. Naohiro Tawara (20 papers)
  8. Marc Delcroix (94 papers)
Citations (3)