Neural Lexicon Reader: Reduce Pronunciation Errors in End-to-end TTS by Leveraging External Textual Knowledge (2110.09698v2)

Published 19 Oct 2021 in cs.SD, cs.CL, and eess.AS

Abstract: End-to-end TTS requires a large amount of paired speech/text data to cover all necessary knowledge, particularly how to pronounce different words in diverse contexts, so that a neural model may learn such knowledge accordingly. But in real applications, such a high demand for training data is hard to satisfy, and additional knowledge often needs to be injected manually. For example, to capture pronunciation knowledge in languages without a regular orthography, a complicated grapheme-to-phoneme pipeline must be built on top of a large structured pronunciation lexicon, leading to extra, sometimes high, costs when extending neural TTS to such languages. In this paper, we propose a framework that learns to automatically extract knowledge from unstructured external resources using a novel Token2Knowledge attention module. The framework is applied to build a TTS model named Neural Lexicon Reader, which extracts pronunciations from raw lexicon texts in an end-to-end manner. Experiments show that the proposed model significantly reduces pronunciation errors in low-resource, end-to-end Chinese TTS, and that the lexicon-reading capability can be transferred to other languages with a smaller amount of data.
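The paper does not ship code, but the abstract's description suggests a cross-attention mechanism in which each input token queries an encoded span of raw lexicon text and retrieves pronunciation-relevant knowledge. Below is a minimal PyTorch sketch of one plausible reading of such a Token2Knowledge module; the single-head scaled dot-product design, the layer names, the dimensions, and the residual fusion are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class Token2KnowledgeAttention(nn.Module):
    """Single-head cross-attention from input tokens to external
    lexicon text: an assumed sketch of a Token2Knowledge module,
    not the configuration used in the paper."""

    def __init__(self, d_model: int, d_knowledge: int):
        super().__init__()
        self.query = nn.Linear(d_model, d_knowledge)
        self.key = nn.Linear(d_knowledge, d_knowledge)
        self.value = nn.Linear(d_knowledge, d_model)

    def forward(self, tokens, knowledge, knowledge_mask=None):
        # tokens:         (B, T, d_model)      TTS encoder states
        # knowledge:      (B, K, d_knowledge)  encoded raw lexicon text
        # knowledge_mask: (B, K) bool, True at valid lexicon positions
        q = self.query(tokens)                                # (B, T, d_k)
        k = self.key(knowledge)                               # (B, K, d_k)
        scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5  # (B, T, K)
        if knowledge_mask is not None:
            scores = scores.masked_fill(
                ~knowledge_mask.unsqueeze(1), float("-inf"))
        attn = torch.softmax(scores, dim=-1)   # attention over lexicon text
        retrieved = attn @ knowledge           # (B, T, d_knowledge)
        return tokens + self.value(retrieved)  # residual fusion (assumed)


# Example: 8 input tokens attending over 40 lexicon-text positions.
module = Token2KnowledgeAttention(d_model=256, d_knowledge=128)
tokens = torch.randn(2, 8, 256)
knowledge = torch.randn(2, 40, 128)
mask = torch.ones(2, 40, dtype=torch.bool)
out = module(tokens, knowledge, mask)  # (2, 8, 256)
```

Because the module attends over unstructured text rather than a structured lexicon lookup, the same weights can in principle be reused on another language's lexicon, which matches the transfer result the abstract reports.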

Authors (4)
  1. Mutian He (11 papers)
  2. Jingzhou Yang (2 papers)
  3. Lei He (121 papers)
  4. Frank K. Soong (17 papers)
Citations (1)
