Generating Human Readable Transcript for Automatic Speech Recognition with Pre-trained Language Model (2102.11114v1)

Published 22 Feb 2021 in cs.CL, cs.SD, and eess.AS

Abstract: Modern Automatic Speech Recognition (ASR) systems can achieve high recognition accuracy. However, even a perfectly accurate transcript can still be challenging to read due to disfluencies, filler words, and other errata common in spoken communication. Many downstream tasks and human readers rely on the output of the ASR system; therefore, errors introduced by the speaker and the ASR system alike are propagated to the next task in the pipeline. In this work, we propose an ASR post-processing model that transforms incorrect and noisy ASR output into readable text for humans and downstream tasks. We leverage the Metadata Extraction (MDE) corpus to construct a task-specific dataset for our study. Since the dataset is small, we propose a novel data augmentation method and use a two-stage training strategy to fine-tune the RoBERTa pre-trained model. On the constructed test set, our model outperforms a production two-step pipeline-based post-processing method by a large margin of 13.26 points on readability-aware WER (RA-WER) and 17.53 points on BLEU. Human evaluation also demonstrates that our method generates more human-readable transcripts than the baseline method.
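
As a concrete illustration of the automatic metrics named in the abstract, the sketch below scores a post-processed hypothesis against a human-readable reference. This is a minimal sketch, assuming the `sacrebleu` and `jiwer` packages (the paper does not name its tooling), with hypothetical example strings; plain WER against the readable reference is used as a stand-in for the paper's RA-WER, which additionally tolerates alternative readable renderings of the same speech.

```python
# Minimal evaluation sketch for ASR post-processing output.
# Assumptions: sacrebleu and jiwer are installed; example strings are
# hypothetical; plain WER approximates the paper's readability-aware WER.
import sacrebleu
from jiwer import wer

# A readable reference transcript and a post-processed ASR hypothesis.
references = ["Let's meet at 3 p.m. on Friday to review the results."]
hypotheses = ["Let's meet at 3 PM on Friday to review the results."]

# Corpus-level BLEU (0-100, higher is better); sacrebleu takes one
# hypothesis list and a list of reference lists.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}")

# Word error rate against the readable reference (lower is better).
print(f"WER:  {wer(references, hypotheses):.3f}")
```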

Authors (8)
  1. Junwei Liao (12 papers)
  2. Yu Shi (153 papers)
  3. Ming Gong (246 papers)
  4. Linjun Shou (53 papers)
  5. Sefik Eskimez (1 paper)
  6. Liyang Lu (15 papers)
  7. Hong Qu (13 papers)
  8. Michael Zeng (76 papers)
Citations (9)