UniFaceGAN: A Unified Framework for Temporally Consistent Facial Video Editing (2108.05650v1)

Published 12 Aug 2021 in cs.CV

Abstract: Recent research has witnessed advances in facial image editing tasks including face swapping and face reenactment. However, these methods are confined to dealing with one specific task at a time. In addition, for video facial editing, previous methods either simply apply transformations frame by frame or utilize multiple frames in a concatenated or iterative fashion, which leads to noticeable visual flickers. In this paper, we propose a unified temporally consistent facial video editing framework termed UniFaceGAN. Based on a 3D reconstruction model and a simple yet efficient dynamic training sample selection mechanism, our framework is designed to handle face swapping and face reenactment simultaneously. To enforce the temporal consistency, a novel 3D temporal loss constraint is introduced based on the barycentric coordinate interpolation. Besides, we propose a region-aware conditional normalization layer to replace the traditional AdaIN or SPADE to synthesize more context-harmonious results. Compared with the state-of-the-art facial image editing methods, our framework generates video portraits that are more photo-realistic and temporally smooth.
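
The abstract gives no implementation details, but the region-aware conditional normalization layer it mentions can be pictured as a SPADE-style modulation whose scale and shift are predicted from facial region masks instead of a single global style code. The sketch below is a hypothetical PyTorch illustration under that assumption only; the class name RegionAwareCondNorm, the region_masks input, and all layer sizes are invented for the example and are not taken from the paper.

```python
# Hypothetical sketch of a region-aware conditional normalization layer.
# This is NOT the paper's implementation; it only illustrates the general idea
# of modulating normalized features with parameters predicted per facial region,
# in the spirit of SPADE but conditioned on region masks.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegionAwareCondNorm(nn.Module):
    def __init__(self, num_features: int, num_regions: int, hidden: int = 128):
        super().__init__()
        # Parameter-free normalization of the incoming feature map.
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        # Small conv net mapping region masks to per-pixel modulation parameters.
        self.shared = nn.Sequential(
            nn.Conv2d(num_regions, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.to_gamma = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, region_masks: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) feature map; region_masks: (N, R, H', W') one-hot facial regions.
        masks = F.interpolate(region_masks.float(), size=x.shape[2:], mode="nearest")
        h = self.shared(masks)
        gamma = self.to_gamma(h)   # per-pixel, per-channel scale
        beta = self.to_beta(h)     # per-pixel, per-channel shift
        return self.norm(x) * (1.0 + gamma) + beta
```

In a generator, a layer like this would sit where AdaIN or SPADE blocks normally go, so the modulation can differ across regions such as eyes, mouth, skin, and background rather than being uniform per channel, which is one plausible reading of the "context-harmonious" claim in the abstract.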

Authors (9)
  1. Meng Cao (107 papers)
  2. Haozhi Huang (15 papers)
  3. Hao Wang (1124 papers)
  4. Xuan Wang (205 papers)
  5. Li Shen (363 papers)
  6. Sheng Wang (239 papers)
  7. Linchao Bao (43 papers)
  8. Zhifeng Li (74 papers)
  9. Jiebo Luo (355 papers)
Citations (13)
