
You Do Not Need More Data: Improving End-To-End Speech Recognition by Text-To-Speech Data Augmentation (2005.07157v2)

Published 14 May 2020 in eess.AS, cs.CL, cs.LG, and cs.SD

Abstract: Data augmentation is one of the most effective ways to make end-to-end automatic speech recognition (ASR) perform close to the conventional hybrid approach, especially when dealing with low-resource tasks. Using recent advances in speech synthesis (text-to-speech, or TTS), we build our TTS system on an ASR training database and then extend the data with synthesized speech to train a recognition model. We argue that, when the training data amount is relatively low, this approach can allow an end-to-end model to reach hybrid systems' quality. For an artificial low-to-medium-resource setup, we compare the proposed augmentation with the semi-supervised learning technique. We also investigate the influence of vocoder usage on final ASR performance by comparing the Griffin-Lim algorithm with our modified LPCNet. When applied with an external language model, our approach outperforms a semi-supervised setup for LibriSpeech test-clean and is only 33% worse than a comparable supervised setup. Our system establishes a competitive result for end-to-end ASR trained on the LibriSpeech train-clean-100 set, with WER 4.3% for test-clean and 13.5% for test-other.
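The figures quoted above are word error rates (WER), the standard ASR metric: the word-level Levenshtein edit distance between hypothesis and reference, divided by the reference length. A minimal sketch of the computation (not the authors' evaluation code, just the standard definition):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution,      # substitute (or match)
                           dp[i - 1][j] + 1,  # delete from reference
                           dp[i][j - 1] + 1)  # insert into hypothesis
    return dp[len(ref)][len(hyp)] / len(ref)
```

So a WER of 4.3% on test-clean means roughly 4.3 word edits (substitutions, insertions, or deletions) per 100 reference words.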

Authors (6)
  1. Aleksandr Laptev (14 papers)
  2. Roman Korostik (5 papers)
  3. Aleksey Svischev (1 paper)
  4. Andrei Andrusenko (12 papers)
  5. Ivan Medennikov (12 papers)
  6. Sergey Rybin (1 paper)
Citations (60)