Leveraging supplementary text data to kick-start automatic speech recognition system development with limited transcriptions (2302.04975v1)

Published 9 Feb 2023 in cs.CL

Abstract: Recent research using pre-trained transformer models suggests that just 10 minutes of transcribed speech may be enough to fine-tune such a model for automatic speech recognition (ASR) -- at least if we can also leverage vast amounts of text data (803 million tokens). But is that much text data necessary? We study the use of different amounts of text data, both for creating a lexicon that constrains ASR decoding to possible words (e.g. *dogz vs. dogs), and for training larger language models that bias the system toward probable word sequences (e.g. too dogs vs. two dogs). We perform experiments using 10 minutes of transcribed speech from English (for replicating prior work) and two additional pairs of languages differing in the availability of supplemental text data: Gronings and Frisian (~7.5M token corpora available), and Besemah and Nasal (only small lexica available). For all languages, we found that using only a lexicon did not appreciably improve ASR performance. For Gronings and Frisian, we found that lexica and language models derived from 'novel-length' 80k token subcorpora reduced the word error rate (WER) to 39% on average. Our findings suggest that where a text corpus in the upper tens of thousands of tokens or more is available, fine-tuning a transformer model with just tens of minutes of transcribed speech holds some promise towards obtaining human-correctable transcriptions near the 30% WER rule-of-thumb.
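
The setup the abstract describes -- a CTC acoustic model fine-tuned on a small amount of transcribed speech, with decoding constrained by a lexicon and biased by an n-gram language model trained on supplementary text -- can be illustrated with a short decoding script. The following is a minimal sketch, not the authors' implementation: the checkpoint path, lexicon.txt, text_corpus.arpa, and utterance.wav are placeholder assumptions, and it uses the Hugging Face transformers and pyctcdecode libraries purely for concreteness.

```python
# Minimal sketch (not the authors' released code) of lexicon- and LM-assisted
# CTC decoding. Checkpoint name and file paths are hypothetical placeholders.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from pyctcdecode import build_ctcdecoder

CHECKPOINT = "path/to/wav2vec2-finetuned-10min"   # assumed fine-tuned model
processor = Wav2Vec2Processor.from_pretrained(CHECKPOINT)
model = Wav2Vec2ForCTC.from_pretrained(CHECKPOINT).eval()

# Lexicon: restrict hypotheses to attested word forms (e.g. "dogs", not "dogz").
with open("lexicon.txt", encoding="utf-8") as f:  # hypothetical word list, one per line
    unigrams = [line.strip() for line in f if line.strip()]

# CTC labels must be ordered by their index in the model's output layer.
vocab = processor.tokenizer.get_vocab()
labels = [tok for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1])]

# An n-gram LM (e.g. KenLM trained on the supplementary text corpus) biases the
# beam search toward probable word sequences ("two dogs" over "too dogs").
decoder = build_ctcdecoder(
    labels,
    kenlm_model_path="text_corpus.arpa",          # hypothetical LM file
    unigrams=unigrams,
)

audio, _ = librosa.load("utterance.wav", sr=16_000)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits[0].cpu().numpy()
print(decoder.decode(logits))
```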

Authors (10)
  1. Nay San (7 papers)
  2. Martijn Bartelds (10 papers)
  3. Blaine Billings (1 paper)
  4. Ella de Falco (1 paper)
  5. Hendi Feriza (1 paper)
  6. Johan Safri (1 paper)
  7. Wawan Sahrozi (1 paper)
  8. Ben Foley (2 papers)
  9. Bradley McDonnell (2 papers)
  10. Dan Jurafsky (118 papers)
Citations (8)
