Segmenting Subtitles for Correcting ASR Segmentation Errors (2104.07868v1)
Abstract: Typical ASR systems segment input audio into utterances using purely acoustic information, which may not resemble the sentence-like units expected by conventional machine translation (MT) systems in spoken language translation. In this work, we propose a model that corrects the acoustic segmentation of ASR output for low-resource languages to improve performance on downstream tasks. We use subtitles as a proxy dataset for this correction task, creating synthetic acoustic utterances by modeling common error modes. We train a neural tagging model to correct ASR acoustic segmentation and show that it improves downstream performance on MT and audio-document cross-language information retrieval (CLIR).
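The abstract describes two technical steps: generating synthetic acoustic utterances from sentence-segmented subtitles, and framing segmentation correction as a tagging problem. Below is a minimal, hypothetical Python sketch of that idea: it flattens subtitle sentences into an ASR-like token stream cut at random points (ignoring the true sentence boundaries, so both merge and split errors arise) and emits per-token boundary tags a tagger could be trained on. The function name `make_synthetic_example`, the `split_prob` parameter, and all probabilities are illustrative assumptions, not details from the paper.

```python
import random

def make_synthetic_example(sentences, split_prob=0.1, seed=0):
    """Flatten sentence-segmented subtitles into ASR-like segments plus
    token-level boundary tags (1 = a true sentence ends after this token).

    Hypothetical sketch: the paper's actual error model and probabilities
    are not specified here, so split_prob is a placeholder.
    """
    rng = random.Random(seed)
    tokens, tags = [], []
    for sent in sentences:
        words = sent.split()
        tokens.extend(words)
        # Mark the sentence boundary after the final word of each sentence.
        tags.extend([0] * (len(words) - 1) + [1])

    # Simulate acoustic segmentation: cut the token stream at random
    # positions, ignoring the gold sentence boundaries. This produces both
    # merged sentences and mid-sentence splits, the common error modes.
    segments, start = [], 0
    for i in range(1, len(tokens)):
        if rng.random() < split_prob:
            segments.append(tokens[start:i])
            start = i
    segments.append(tokens[start:])
    return segments, tokens, tags

if __name__ == "__main__":
    subs = ["hello there", "how are you today", "i am fine"]
    segments, tokens, tags = make_synthetic_example(subs, split_prob=0.3)
    print("ASR-like segments:", [" ".join(s) for s in segments])
    print("Tokens:", tokens)
    print("Boundary tags:", tags)
```

A neural tagger trained on such (token, boundary-tag) pairs can then re-segment real ASR output into sentence-like units before it is passed to MT or CLIR.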
- David Wan (16 papers)
- Chris Kedzie (14 papers)
- Faisal Ladhak (31 papers)
- Elsbeth Turcan (7 papers)
- Petra Galuščáková (6 papers)
- Elena Zotkina (1 paper)
- Zhengping Jiang (19 papers)
- Peter Bell (60 papers)
- Kathleen McKeown (85 papers)