
Speech2Video: Cross-Modal Distillation for Speech to Video Generation (2107.04806v1)

Published 10 Jul 2021 in cs.SD, cs.CV, eess.AS, and eess.IV

Abstract: This paper investigates a novel task of talking face video generation solely from speech. The speech-to-video generation technique can spark interesting applications in the entertainment, customer service, and human-computer interaction industries. Indeed, the timbre, accent, and speed of speech can carry rich information about a speaker's appearance. The challenge mainly lies in disentangling the distinct visual attributes from audio signals. In this article, we propose a lightweight, cross-modal distillation method to extract disentangled emotional and identity information from unlabelled video inputs. The extracted features are then integrated by a generative adversarial network into talking face video clips. With carefully crafted discriminators, the proposed framework achieves realistic generation results. Experiments with observed individuals demonstrate that the proposed framework captures emotional expressions solely from speech and produces spontaneous facial motion in the video output. Compared to the baseline method, in which speech is combined with a static image of the speaker, the results of the proposed framework are almost indistinguishable. User studies also show that the proposed method outperforms existing algorithms in terms of emotion expression in the generated videos.
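The pipeline the abstract describes — separate encoders that disentangle emotion and identity from the same speech signal, whose outputs a generator then turns into video frames — can be sketched in miniature. This is a toy shape-level illustration with random weights, not the paper's architecture; all dimensions, weight names, and the linear-plus-tanh "encoders" are hypothetical stand-ins for the learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper)
T, D_AUDIO = 100, 80      # speech frames x audio feature size
D_EMO, D_ID = 16, 64      # disentangled emotion / identity code sizes
H, W = 64, 64             # output frame resolution

def encode(x, weight):
    """Linear map + tanh, standing in for a learned encoder network."""
    return np.tanh(x @ weight)

# Two separate encoders pull apart emotion and identity from the same audio
W_emo = rng.standard_normal((D_AUDIO, D_EMO)) * 0.1
W_id = rng.standard_normal((D_AUDIO, D_ID)) * 0.1

speech = rng.standard_normal((T, D_AUDIO))       # stand-in audio features
emo_codes = encode(speech, W_emo)                # per-frame emotion code
id_code = encode(speech.mean(axis=0), W_id)      # one identity code per clip

# Toy "generator": maps [emotion | identity] to one flat frame per time step
W_gen = rng.standard_normal((D_EMO + D_ID, H * W)) * 0.01
latent = np.concatenate([emo_codes, np.tile(id_code, (T, 1))], axis=1)
video = np.tanh(latent @ W_gen).reshape(T, H, W)  # T grayscale frames

print(video.shape)  # (100, 64, 64)
```

The key structural point mirrored here is that identity is pooled over the whole clip (one code per speaker) while emotion varies per frame, which is what lets spontaneous facial motion track the speech dynamics.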

Authors (7)
  1. Shijing Si (32 papers)
  2. Jianzong Wang (144 papers)
  3. Xiaoyang Qu (41 papers)
  4. Ning Cheng (96 papers)
  5. Wenqi Wei (55 papers)
  6. Xinghua Zhu (6 papers)
  7. Jing Xiao (267 papers)
Citations (15)