On the Problem of Text-To-Speech Model Selection for Synthetic Data Generation in Automatic Speech Recognition (2407.21476v1)

Published 31 Jul 2024 in cs.CL, cs.LG, cs.SD, and eess.AS

Abstract: The rapid development of neural text-to-speech (TTS) systems has enabled their use in other areas of natural language processing, such as automatic speech recognition (ASR) and spoken language translation (SLT). Due to the large number of different TTS architectures and their extensions, selecting which TTS systems to use for synthetic data creation is not an easy task. We compare five different TTS decoder architectures in the scope of synthetic data generation to show their impact on CTC-based speech recognition training. We relate the recognition results to computable metrics such as NISQA MOS and intelligibility, finding no clear relation to ASR performance. We also observe that for data generation auto-regressive decoding performs better than non-autoregressive decoding, and propose an approach to quantify TTS generalization capabilities.
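
The abstract reports that computable TTS quality metrics such as NISQA MOS show no clear relation to downstream ASR performance. A minimal sketch of how such a check could look is given below; the per-system MOS and WER values are placeholders, not results from the paper, and the correlation test is an assumption about the analysis, not the authors' actual procedure.

```python
# Hypothetical sketch: does a TTS quality metric (NISQA MOS) predict the WER of a
# CTC ASR model trained on that system's synthetic data? All numbers are illustrative.
from scipy.stats import pearsonr

# One entry per TTS decoder architecture (values are made up for illustration).
nisqa_mos = [3.8, 4.1, 3.5, 4.3, 3.9]      # predicted MOS of the synthetic audio
asr_wer   = [12.4, 10.8, 14.1, 11.5, 9.7]  # WER [%] of ASR trained on that synthetic data

r, p = pearsonr(nisqa_mos, asr_wer)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
# A weak or non-significant correlation would be consistent with the paper's finding
# that such metrics do not clearly predict ASR performance.
```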

Authors (3)
  1. Nick Rossenbach (9 papers)
  2. Ralf Schlüter (73 papers)
  3. Sakriani Sakti (41 papers)
Citations (2)