Decoupling Speaker-Independent Emotions for Voice Conversion Via Source-Filter Networks (2110.01164v1)

Published 4 Oct 2021 in eess.AS, cs.LG, cs.SD, and eess.SP

Abstract: Emotional voice conversion (VC) aims to convert a neutral voice to an emotional (e.g. happy) one while retaining the linguistic information and speaker identity. We note that decoupling emotional features from other speech information (such as speaker and content) is key to achieving strong performance. Recent attempts at speech representation decoupling on neutral speech do not work well on emotional speech, due to the more complex acoustic properties of the latter. To address this problem, we propose a novel Source-Filter-based Emotional VC model (SFEVC) that properly filters speaker-independent emotion features from both the timbre and pitch features. Our SFEVC model consists of multi-channel encoders, emotion separate encoders, and one decoder. All encoder modules adopt a designed information-bottleneck auto-encoder. Additionally, to further improve the conversion quality for various emotions, a novel two-stage training strategy based on the 2D Valence-Arousal (VA) space is proposed. Experimental results show that the proposed SFEVC, together with the two-stage training strategy, outperforms all baselines and achieves state-of-the-art performance in speaker-independent emotional VC with nonparallel data.
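The abstract describes an architecture of several bottlenecked encoders (timbre/filter, pitch/source, and emotion branches) feeding a single decoder. The toy sketch below, in plain NumPy, illustrates only that module layout; all dimensions, layer shapes, and module names are hypothetical and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class BottleneckEncoder:
    """Toy information-bottleneck encoder: a single linear projection
    to a narrow latent, so only a few factors of variation survive."""
    def __init__(self, in_dim, bottleneck_dim):
        self.w = rng.standard_normal((in_dim, bottleneck_dim)) * 0.1
        self.b = np.zeros(bottleneck_dim)

    def __call__(self, x):
        return np.tanh(x @ self.w + self.b)

class Decoder:
    """Toy decoder: maps the concatenated latents back to frame space."""
    def __init__(self, latent_dim, out_dim):
        self.w = rng.standard_normal((latent_dim, out_dim)) * 0.1
        self.b = np.zeros(out_dim)

    def __call__(self, z):
        return z @ self.w + self.b

# Hypothetical feature dimensions: 80-dim mel frames (filter/timbre
# channel) and a 1-dim F0 contour (source/pitch channel).
timbre_enc  = BottleneckEncoder(in_dim=80, bottleneck_dim=8)  # filter branch
pitch_enc   = BottleneckEncoder(in_dim=1,  bottleneck_dim=2)  # source branch
emotion_enc = BottleneckEncoder(in_dim=80, bottleneck_dim=4)  # emotion branch
decoder     = Decoder(latent_dim=8 + 2 + 4, out_dim=80)

frames = rng.standard_normal((100, 80))  # fake mel-spectrogram frames
f0     = rng.standard_normal((100, 1))   # fake pitch contour

# Concatenate the separately bottlenecked latents, then decode.
z = np.concatenate([timbre_enc(frames), pitch_enc(f0), emotion_enc(frames)],
                   axis=1)
recon = decoder(z)
print(recon.shape)  # (100, 80)
```

In the actual model, emotion conversion would amount to swapping the emotion-branch latent while keeping the timbre and pitch latents fixed; the narrow bottlenecks are what pressure each branch to retain only its own factor.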

Authors (6)
  1. Zhaojie Luo (5 papers)
  2. Shoufeng Lin (4 papers)
  3. Rui Liu (320 papers)
  4. Jun Baba (11 papers)
  5. Yuichiro Yoshikawa (12 papers)
  6. Hiroshi Ishiguro (2 papers)
Citations (7)
