FACIAL: Synthesizing Dynamic Talking Face with Implicit Attribute Learning (2108.07938v1)

Published 18 Aug 2021 in cs.CV

Abstract: In this paper, we propose a talking face generation method that takes an audio signal as input and a short target video clip as reference, and synthesizes a photo-realistic video of the target face with natural lip motions, head poses, and eye blinks that are in sync with the input audio. We note that the synthesized face attributes include not only explicit ones, such as lip motions, that are highly correlated with speech, but also implicit ones, such as head poses and eye blinks, that are only weakly correlated with the input audio. To model such complicated relationships between the different face attributes and the input audio, we propose a FACe Implicit Attribute Learning Generative Adversarial Network (FACIAL-GAN), which integrates phonetics-aware, context-aware, and identity-aware information to synthesize 3D face animation with realistic motions of lips, head poses, and eye blinks. Our Rendering-to-Video network then takes the rendered face images and the attention map of eye blinks as input and generates photo-realistic output video frames. Experimental results and user studies show that our method can generate realistic talking face videos with not only synchronized lip motions but also natural head movements and eye blinks, with better quality than state-of-the-art methods.
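
To make the two-stage pipeline concrete, here is a minimal PyTorch sketch of the idea described in the abstract. All module names, layer sizes, and the 70-dimensional animation-parameter output are illustrative assumptions, not the authors' released code: a hypothetical `FacialGANGenerator` fuses a phonetics-aware per-frame audio embedding, a context-aware temporal model (carrying the weakly audio-correlated head poses and eye blinks), and an identity-aware speaker code into per-frame 3D face animation parameters, and a hypothetical `RenderingToVideo` network composites a rendered face image with the eye-blink attention map into a photo-realistic frame.

```python
import torch
import torch.nn as nn

class FacialGANGenerator(nn.Module):
    """Illustrative sketch of a FACIAL-GAN-style generator: fuses
    phonetics-aware (per-frame audio), context-aware (temporal), and
    identity-aware (speaker) features into per-frame 3D face animation
    parameters (expression, head pose, eye blink). Sizes are assumed."""

    def __init__(self, audio_dim=128, id_dim=64, hidden=256, param_dim=70):
        super().__init__()
        # Phonetics-aware branch: frame-level audio features -> local embedding
        self.phonetic = nn.Sequential(
            nn.Linear(audio_dim, hidden), nn.LeakyReLU(0.2))
        # Context-aware branch: temporal model over the audio window, so
        # implicit attributes (head pose, blinks) can depend on context
        # rather than on the current phoneme alone
        self.context = nn.GRU(hidden, hidden, batch_first=True)
        # Identity-aware branch: embedding of the target speaker
        self.identity = nn.Linear(id_dim, hidden)
        # Decoder to per-frame 3D animation parameters
        self.decoder = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, param_dim))

    def forward(self, audio_feats, id_code):
        # audio_feats: (B, T, audio_dim); id_code: (B, id_dim)
        local = self.phonetic(audio_feats)           # (B, T, hidden)
        ctx, _ = self.context(local)                 # (B, T, hidden)
        idv = self.identity(id_code).unsqueeze(1)    # (B, 1, hidden)
        fused = torch.cat([ctx, idv.expand_as(ctx)], dim=-1)
        return self.decoder(fused)                   # (B, T, param_dim)

class RenderingToVideo(nn.Module):
    """Illustrative sketch of the rendering-to-video step: takes a
    rendered 3D face image plus an eye-blink attention map and outputs
    a photo-realistic frame. A plain conv stack stands in for whatever
    image translation network the paper actually uses."""

    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh())

    def forward(self, rendered, blink_attn):
        # rendered: (B, 3, H, W); blink_attn: (B, 1, H, W)
        return self.net(torch.cat([rendered, blink_attn], dim=1))

# Usage sketch: 4 clips of 100 frames, then one rendered frame per clip.
gen = FacialGANGenerator()
params = gen(torch.randn(4, 100, 128), torch.randn(4, 64))  # (4, 100, 70)
r2v = RenderingToVideo()
frame = r2v(torch.randn(4, 3, 256, 256), torch.rand(4, 1, 256, 256))
```

The GRU in the context-aware branch is one plausible way to realize the paper's key observation: implicit attributes such as blinks and head motion are only weakly tied to the current audio frame, so they are predicted from temporal context and identity rather than from phonetics alone.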

Authors (7)
  1. Chenxu Zhang (16 papers)
  2. Yifan Zhao (66 papers)
  3. Yifei Huang (71 papers)
  4. Ming Zeng (123 papers)
  5. Saifeng Ni (2 papers)
  6. Madhukar Budagavi (5 papers)
  7. Xiaohu Guo (30 papers)
Citations (118)
