Converting Anyone's Emotion: Towards Speaker-Independent Emotional Voice Conversion (2005.07025v3)

Published 13 May 2020 in cs.SD, cs.AI, cs.CL, and eess.AS

Abstract: Emotional voice conversion aims to convert the emotion of speech from one state to another while preserving the linguistic content and speaker identity. Prior studies on emotional voice conversion are mostly carried out under the assumption that emotion is speaker-dependent. We consider that there is a common code between speakers for emotional expression in a spoken language; therefore, a speaker-independent mapping between emotional states is possible. In this paper, we propose a speaker-independent emotional voice conversion framework that can convert anyone's emotion without the need for parallel data. We propose a VAW-GAN-based encoder-decoder structure to learn the spectrum and prosody mapping. We perform prosody conversion by using the continuous wavelet transform (CWT) to model temporal dependencies. We also investigate the use of F0 as an additional input to the decoder to improve emotion conversion performance. Experiments show that the proposed speaker-independent framework achieves competitive results for both seen and unseen speakers.
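
The prosody conversion described in the abstract rests on decomposing the F0 contour with a continuous wavelet transform, so that temporal dependencies at multiple time scales can be modeled separately. Below is a minimal sketch of such a decomposition in Python using PyWavelets; the ten dyadic scales and the Mexican hat mother wavelet are assumptions common in CWT-based F0 modeling, not details stated in the abstract, and the paper's exact configuration may differ.

```python
import numpy as np
import pywt

def cwt_f0_decompose(f0, num_scales=10):
    """Decompose an F0 contour into multi-scale CWT components.

    f0: 1-D array of F0 values in Hz (unvoiced frames assumed
        already interpolated).
    Returns an array of shape (num_scales, len(f0)), one row per scale.
    """
    # Normalize log-F0, as is common in prosody modeling.
    log_f0 = np.log(f0)
    log_f0 = (log_f0 - log_f0.mean()) / (log_f0.std() + 1e-8)

    # Dyadic scales with a Mexican hat ("mexh") mother wavelet --
    # an assumed configuration, chosen for illustration.
    scales = 2.0 ** np.arange(1, num_scales + 1)
    coeffs, _ = pywt.cwt(log_f0, scales, "mexh")
    return coeffs

# Usage: a synthetic rising F0 contour of 200 frames.
f0 = np.linspace(120.0, 220.0, 200)
components = cwt_f0_decompose(f0)
print(components.shape)  # (10, 200)
```

The per-scale components can then serve as prosody features for an encoder-decoder model, and an inverse transform of converted components yields the target F0 contour.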

Authors (4)
  1. Kun Zhou (217 papers)
  2. Berrak Sisman (49 papers)
  3. Mingyang Zhang (56 papers)
  4. Haizhou Li (286 papers)
Citations (50)
