Enhancing Word-Level Semantic Representation via Dependency Structure for Expressive Text-to-Speech Synthesis (2104.06835v4)

Published 14 Apr 2021 in cs.CL, cs.SD, and eess.AS

Abstract: Exploiting the rich linguistic information in raw text is crucial for expressive text-to-speech (TTS) synthesis. With the development of large-scale pre-trained text representations, bidirectional encoder representations from Transformers (BERT) has been shown to capture semantic information and has recently been applied to TTS. However, original or simply fine-tuned BERT embeddings still cannot provide the semantic knowledge that expressive TTS models need. In this paper, we propose a word-level semantic representation enhancement method based on dependency structure and pre-trained BERT embeddings. The BERT embedding of each word is reprocessed according to its specific dependencies and related words in the sentence, yielding a more effective semantic representation for TTS. To better exploit the dependency structure, a relational gated graph network (RGGN) is introduced so that semantic information flows and aggregates along the dependency structure. Experimental results show that the proposed method further improves the naturalness and expressiveness of synthesized speech on both Mandarin and English datasets.
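To make the idea concrete, below is a minimal sketch of what a relational gated message-passing layer over a dependency parse might look like. This is an illustrative assumption, not the authors' implementation: the class name `RGGNLayer`, the per-relation linear transforms, and the GRU-gated node update are all hypothetical design choices consistent with the abstract's description of semantic information flowing and aggregating through dependency edges.

```python
# Hypothetical RGGN-style layer: per-relation message passing over
# dependency arcs, with a GRU cell gating how much aggregated context
# updates each word's representation. Sketch only; names and shapes
# are assumptions, not the paper's actual code.
import torch
import torch.nn as nn

class RGGNLayer(nn.Module):
    def __init__(self, hidden_dim: int, num_relations: int):
        super().__init__()
        # One message transform per dependency relation type (e.g., nsubj, dobj).
        self.rel_transforms = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(num_relations)]
        )
        # GRU cell gates the update of each node with its aggregated message.
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, h, edges, edge_types):
        # h: (num_words, hidden_dim) word representations (e.g., BERT embeddings)
        # edges: (num_edges, 2) tensor of (head, dependent) dependency arcs
        # edge_types: (num_edges,) relation-label index for each arc
        msg = torch.zeros_like(h)
        for r, transform in enumerate(self.rel_transforms):
            mask = edge_types == r
            if not mask.any():
                continue
            src, dst = edges[mask, 0], edges[mask, 1]
            # Sum relation-specific messages from dependency neighbors.
            msg.index_add_(0, dst, transform(h[src]))
        # Gated update: each word fuses context from its dependency neighbors.
        return self.gru(msg, h)

# Usage sketch: 5 words, 256-dim embeddings, 40 assumed relation types.
layer = RGGNLayer(hidden_dim=256, num_relations=40)
h = torch.randn(5, 256)
edges = torch.tensor([[1, 0], [1, 2], [2, 3], [3, 4]])
edge_types = torch.tensor([0, 1, 2, 1])
h_enhanced = layer(h, edges, edge_types)  # (5, 256)
```

Stacking several such layers would let semantic information propagate across multi-hop dependency paths before the enhanced word representations are fed to the TTS acoustic model.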

Authors (7)
  1. Yixuan Zhou (30 papers)
  2. Changhe Song (17 papers)
  3. Jingbei Li (12 papers)
  4. Zhiyong Wu (171 papers)
  5. Yanyao Bian (6 papers)
  6. Dan Su (101 papers)
  7. Helen Meng (204 papers)
Citations (6)
