Learning to Dub Movies via Hierarchical Prosody Models (2212.04054v2)

Published 8 Dec 2022 in cs.CL, cs.SD, and eess.AS

Abstract: Given a piece of text, a video clip, and a reference audio, the movie dubbing task (also known as visual voice cloning, V2C) aims to generate speech that matches the emotion the speaker presents in the video, using the desired speaker's voice as reference. V2C is more challenging than conventional text-to-speech tasks because it additionally requires the generated speech to exactly match the varying emotions and speaking speed presented in the video. Unlike previous works, we propose a novel movie dubbing architecture that tackles these problems via hierarchical prosody modelling, which bridges visual information to the corresponding speech prosody from three aspects: lip, face, and scene. Specifically, we align lip movement to speech duration, and convey facial expression to speech energy and pitch via an attention mechanism based on valence and arousal representations inspired by recent psychology findings. Moreover, we design an emotion booster to capture the atmosphere from global video scenes. All these embeddings are used together to generate a mel-spectrogram, which is then converted to a speech waveform via an existing vocoder. Extensive experimental results on the Chem and V2C benchmark datasets demonstrate the favorable performance of the proposed method. The source code and trained models will be released to the public.
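
To make the hierarchical prosody idea in the abstract concrete, below is a minimal PyTorch sketch of how lip, face, and scene cues could be fused with phoneme features to predict duration, pitch, energy, and a mel-spectrogram. All module names, feature dimensions, and wiring here are illustrative assumptions, not the authors' released implementation; a neural vocoder would convert the predicted mel-spectrogram into a waveform in a full pipeline.

```python
# Minimal sketch of hierarchical prosody modelling for dubbing (assumed design,
# not the paper's official code).
import torch
import torch.nn as nn


class HierarchicalProsodyDubber(nn.Module):
    """Fuses phoneme, lip, face, and scene features to predict a mel-spectrogram."""

    def __init__(self, d_model: int = 256, n_mels: int = 80):
        super().__init__()
        # Text side: phoneme embeddings (vocabulary size is a placeholder).
        self.phoneme_emb = nn.Embedding(100, d_model)
        # Lip branch: maps lip-motion features to per-phoneme log duration
        # (speaking-speed control).
        self.lip_proj = nn.Linear(512, d_model)
        self.duration_head = nn.Linear(d_model, 1)
        # Face branch: valence/arousal features attend over the phoneme sequence
        # to drive pitch and energy predictions.
        self.face_proj = nn.Linear(2, d_model)  # (valence, arousal)
        self.face_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.pitch_head = nn.Linear(d_model, 1)
        self.energy_head = nn.Linear(d_model, 1)
        # Scene branch ("emotion booster"): a global scene embedding added to all steps.
        self.scene_proj = nn.Linear(1024, d_model)
        # Decoder: predicts the mel-spectrogram frame by frame.
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.mel_head = nn.Linear(d_model, n_mels)

    def forward(self, phonemes, lip_feats, face_va, scene_feat):
        # phonemes:   (B, T) int64 phoneme ids
        # lip_feats:  (B, T, 512) lip-motion features aligned to phonemes
        # face_va:    (B, T, 2) valence/arousal per frame
        # scene_feat: (B, 1024) global scene feature
        x = self.phoneme_emb(phonemes) + self.lip_proj(lip_feats)
        log_dur = self.duration_head(x).squeeze(-1)
        face = self.face_proj(face_va)
        attended, _ = self.face_attn(x, face, face)  # face -> prosody attention
        pitch = self.pitch_head(attended).squeeze(-1)
        energy = self.energy_head(attended).squeeze(-1)
        x = attended + self.scene_proj(scene_feat).unsqueeze(1)
        h, _ = self.decoder(x)
        mel = self.mel_head(h)  # (B, T, n_mels)
        return mel, log_dur, pitch, energy


# Shape check with random inputs.
model = HierarchicalProsodyDubber()
mel, dur, pitch, energy = model(
    torch.randint(0, 100, (2, 20)),
    torch.randn(2, 20, 512),
    torch.randn(2, 20, 2),
    torch.randn(2, 1024),
)
print(mel.shape, dur.shape)  # torch.Size([2, 20, 80]) torch.Size([2, 20])
```

In a complete system the predicted durations would expand the phoneme sequence before decoding (as in FastSpeech2-style TTS), and the mel output would be passed to a pretrained vocoder such as HiFi-GAN to obtain the final speech waveform.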

Authors (9)
  1. Gaoxiang Cong (5 papers)
  2. Liang Li (297 papers)
  3. Yuankai Qi (46 papers)
  4. Zhengjun Zha (24 papers)
  5. Qi Wu (323 papers)
  6. Wenyu Wang (75 papers)
  7. Bin Jiang (127 papers)
  8. Ming-Hsuan Yang (376 papers)
  9. Qingming Huang (168 papers)
Citations (15)