Self-supervised Context-aware Style Representation for Expressive Speech Synthesis (2206.12559v1)

Published 25 Jun 2022 in cs.SD, cs.AI, cs.CL, and eess.AS

Abstract: Expressive speech synthesis, as in audiobook synthesis, remains challenging for style representation learning and prediction. Deriving style from reference audio or predicting style tags from text requires a huge amount of labeled data, which is costly to acquire and difficult to define and annotate accurately. In this paper, we propose a novel framework for learning style representation from abundant plain text in a self-supervised manner. It leverages an emotion lexicon and uses contrastive learning and deep clustering. We further integrate the style representation as a conditioning embedding in a multi-style Transformer TTS. Compared with a multi-style TTS that predicts style tags, trained on the same dataset but with human annotations, our method achieves improved results in subjective evaluations on both in-domain and out-of-domain audiobook test sets. Moreover, with the implicit context-aware style representation, the emotion transitions of synthesized audio in a long paragraph sound more natural. Audio samples are available on the demo page.
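The abstract's core idea — using an emotion lexicon to form training pairs and a contrastive objective to learn style embeddings without labels — can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy lexicon, the InfoNCE-style loss, and all function names and parameters here are assumptions for exposition.

```python
import numpy as np

# Toy emotion lexicon mapping words to coarse emotion categories (assumed;
# the paper uses a real lexicon). Sentences with similar emotion profiles
# could be treated as positive pairs for contrastive learning.
EMOTION_LEXICON = {
    "happy": "joy", "delighted": "joy",
    "sad": "sadness", "weeping": "sadness",
}

def emotion_profile(sentence):
    """Count emotion-category hits in a sentence (used to pick positive pairs)."""
    counts = {}
    for word in sentence.lower().split():
        cat = EMOTION_LEXICON.get(word)
        if cat:
            counts[cat] = counts.get(cat, 0) + 1
    return counts

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """A standard InfoNCE contrastive loss: pull the positive's embedding
    toward the anchor and push negatives away. One plausible instantiation
    of the contrastive objective the abstract mentions."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array(
        [cos(anchor, positive) / temperature]
        + [cos(anchor, n) / temperature for n in negatives]
    )
    # log-softmax; the positive is at index 0
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    return -log_probs[0]

# Embeddings would come from a text encoder; random vectors stand in here.
rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.01 * rng.normal(size=8)   # similar style -> positive pair
negatives = [rng.normal(size=8) for _ in range(4)]
loss = info_nce_loss(anchor, positive, negatives)
```

In the paper's full framework, such contrastively learned embeddings are further refined with deep clustering and then fed to the TTS model as a conditioning vector; this sketch covers only the pair-selection and contrastive-loss ingredients.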

Authors (6)
  1. Yihan Wu (44 papers)
  2. Xi Wang (275 papers)
  3. Shaofei Zhang (7 papers)
  4. Lei He (120 papers)
  5. Ruihua Song (48 papers)
  6. Jian-Yun Nie (70 papers)
Citations (15)