
Rhythm-controllable Attention with High Robustness for Long Sentence Speech Synthesis (2306.02593v1)

Published 5 Jun 2023 in cs.AI

Abstract: Regressive Text-to-Speech (TTS) systems use an attention mechanism to generate the alignment between the text and the acoustic feature sequence. This alignment determines synthesis robustness (e.g., the occurrence of skipping, repeating, and collapse) and rhythm via duration control. However, current attention algorithms used in speech synthesis cannot control rhythm using external duration information to generate natural speech while ensuring robustness. In this study, we propose Rhythm-controllable Attention (RC-Attention), based on Tacotron2, which improves robustness and naturalness simultaneously. The proposed attention adopts a trainable scalar learned from four kinds of information to achieve rhythm control, which makes rhythm control more robust and natural, even when synthesized sentences are far longer than those in the training corpus. We use word error counting and an AB preference test to measure the robustness of the proposed method and the naturalness of the synthesized speech, respectively. Results show that RC-Attention achieves the lowest word error rate of nearly 0.6%, compared with 11.8% for the baseline system. Moreover, nearly 60% of subjects prefer speech synthesized with RC-Attention over that with Forward Attention, as the former has more natural rhythm.
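
The abstract does not specify the four information sources that feed the trainable scalar, nor the exact blending rule. The sketch below is a hypothetical PyTorch rendering of the general idea only: a Tacotron2-style additive attention whose content-based alignment is blended with a duration-driven alignment through a learned scalar gate. The class name, the choice of the four gate inputs (decoder query, previous context, expected alignment position, and an external duration embedding), and the shift-based duration alignment are all illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch of rhythm-controllable attention (not the paper's
# exact formulation). Requires PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RhythmControllableAttention(nn.Module):
    """Additive attention blended with a duration-driven alignment
    via a trainable scalar gate (assumed structure)."""

    def __init__(self, query_dim, memory_dim, attn_dim, dur_dim):
        super().__init__()
        self.query_proj = nn.Linear(query_dim, attn_dim, bias=False)
        self.memory_proj = nn.Linear(memory_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)
        # Gate producing the trainable scalar from four (assumed)
        # information sources: decoder query, previous context,
        # expected alignment position, external duration embedding.
        self.rhythm_gate = nn.Sequential(
            nn.Linear(query_dim + memory_dim + 1 + dur_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
            nn.Sigmoid(),  # scalar in (0, 1)
        )

    def forward(self, query, memory, prev_context, prev_align, dur_emb):
        # query:        (B, query_dim)     decoder state at this step
        # memory:       (B, T, memory_dim) encoder outputs
        # prev_context: (B, memory_dim)    context from the previous step
        # prev_align:   (B, T)             previous attention weights
        # dur_emb:      (B, dur_dim)       external duration information
        # Content-based additive (Bahdanau-style) score.
        score = self.v(torch.tanh(
            self.query_proj(query).unsqueeze(1) + self.memory_proj(memory)
        )).squeeze(-1)                                    # (B, T)
        content_align = F.softmax(score, dim=-1)
        # Duration-driven alignment: shift the previous weights one
        # encoder step forward (a crude stand-in for duration control).
        dur_align = F.pad(prev_align, (1, 0))[:, :-1]
        # Expected encoder position of the previous alignment.
        pos = torch.arange(memory.size(1), device=memory.device,
                           dtype=memory.dtype)
        exp_pos = (prev_align * pos).sum(dim=-1, keepdim=True)  # (B, 1)
        # Trainable scalar blends the two alignments.
        lam = self.rhythm_gate(
            torch.cat([query, prev_context, exp_pos, dur_emb], dim=-1))
        align = lam * content_align + (1.0 - lam) * dur_align
        align = align / align.sum(dim=-1, keepdim=True).clamp_min(1e-8)
        context = torch.bmm(align.unsqueeze(1), memory).squeeze(1)
        return context, align
```

Because the gate output is a convex weight, the blended alignment stays a valid distribution at every decoder step, which is one way such a mechanism could preserve robustness while still honoring external duration cues.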

Authors (9)
  1. Dengfeng Ke (12 papers)
  2. Yayue Deng (9 papers)
  3. Yukang Jia (2 papers)
  4. Jinlong Xue (9 papers)
  5. Qi Luo (61 papers)
  6. Ya Li (79 papers)
  7. Jianqing Sun (5 papers)
  8. Jiaen Liang (8 papers)
  9. Binghuai Lin (20 papers)
