ASR Error Correction and Domain Adaptation Using Machine Translation (2003.07692v1)

Published 13 Mar 2020 in eess.AS, cs.LG, cs.SD, and stat.ML

Abstract: Off-the-shelf pre-trained Automatic Speech Recognition (ASR) systems are an increasingly viable service for companies of any size building speech-based products. While these ASR systems are trained on large amounts of data, domain mismatch is still an issue for many parties that want to use the service as-is, leading to suboptimal results for their task. We propose a simple technique to perform domain adaptation for ASR error correction via machine translation. The machine translation model is a strong candidate to learn a mapping from out-of-domain ASR errors to in-domain terms in the corresponding reference files. We use two off-the-shelf ASR systems in this work: Google ASR (commercial) and the ASPIRE model (open-source). We observe a 7% absolute improvement in word error rate and a 4-point absolute improvement in BLEU score on Google ASR output via our proposed method. We also evaluate ASR error correction via a downstream task of Speaker Diarization that captures speaker style, syntax, structure, and semantic improvements we obtain via ASR correction.
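The core idea is to treat error correction as a "translation" task from noisy ASR hypotheses to clean, in-domain reference transcripts. The paper does not tie the approach to a particular toolkit, so the snippet below is only a minimal sketch under assumptions: it uses Hugging Face transformers with a generic seq2seq model (t5-small as a stand-in for the MT model), fine-tunes it on hypothetical (ASR hypothesis, reference) pairs, and measures word error rate with jiwer, mirroring the paper's WER-based evaluation.

```python
# Minimal sketch (not the authors' code): ASR error correction as translation
# from ASR hypotheses to in-domain reference transcripts.
# Assumptions: transformers, datasets, and jiwer are installed; `pairs` holds
# (ASR hypothesis, reference) strings from the target domain.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)
import jiwer

model_name = "t5-small"  # stand-in; the paper does not prescribe a specific MT model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

pairs = [
    ("patient has a history of die beaties", "patient has a history of diabetes"),
    # ... more (out-of-domain ASR hypothesis, in-domain reference) pairs
]

def preprocess(batch):
    # Source = ASR output, target = human reference transcript
    inputs = tokenizer(batch["hyp"], truncation=True, max_length=128)
    targets = tokenizer(text_target=batch["ref"], truncation=True, max_length=128)
    inputs["labels"] = targets["input_ids"]
    return inputs

ds = Dataset.from_dict({"hyp": [h for h, _ in pairs],
                        "ref": [r for _, r in pairs]}).map(preprocess, batched=True)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="asr-correction",
                                  num_train_epochs=3,
                                  per_device_train_batch_size=8),
    train_dataset=ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

# Compare WER of raw ASR output vs. corrected output against the references.
def correct(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    out = model.generate(ids, max_length=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)

refs = [r for _, r in pairs]
hyps = [h for h, _ in pairs]
print("WER before:", jiwer.wer(refs, hyps))
print("WER after :", jiwer.wer(refs, [correct(h) for h in hyps]))
```

In practice the training pairs would come from running the off-the-shelf ASR system (e.g., Google ASR or ASPIRE) on in-domain audio that already has reference transcripts, which is what allows the model to learn domain-specific corrections.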

Authors (5)
  1. Anirudh Mani (2 papers)
  2. Shruti Palaskar (14 papers)
  3. Nimshi Venkat Meripo (2 papers)
  4. Sandeep Konam (10 papers)
  5. Florian Metze (80 papers)
Citations (79)
