NeRF-AD: Neural Radiance Field with Attention-based Disentanglement for Talking Face Synthesis (2401.12568v1)

Published 23 Jan 2024 in cs.CV and cs.MM

Abstract: Audio-driven talking face synthesis is an active research topic in multidimensional signal processing and multimedia. Neural Radiance Fields (NeRF) have recently been introduced to this field to enhance the realism and 3D consistency of the generated faces. However, most existing NeRF-based methods either burden NeRF with complex learning tasks while lacking supervised multimodal feature fusion, or cannot precisely map audio to the facial region related to speech movements. As a result, these methods generate inaccurate lip shapes. This paper moves a portion of the NeRF learning tasks to an earlier stage and proposes a talking face synthesis method via NeRF with attention-based disentanglement (NeRF-AD). In particular, an Attention-based Disentanglement module is introduced to disentangle the face into an Audio-face and an Identity-face using speech-related facial action unit (AU) information. To precisely regulate how audio affects the talking face, we fuse only the Audio-face with the audio feature. In addition, AU information is used to supervise the fusion of these two modalities. Extensive qualitative and quantitative experiments demonstrate that NeRF-AD outperforms state-of-the-art methods in generating realistic talking face videos, in both image quality and lip synchronization. To view video results, please refer to https://xiaoxingliu02.github.io/NeRF-AD.
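
As a concrete illustration of the pipeline the abstract outlines, below is a minimal PyTorch sketch of attention-based disentanglement followed by audio-only fusion. Everything in it is an assumption made for illustration: the module names (`AttentionDisentangle`, `AudioFaceFusion`), the single sigmoid spatial mask, and the feature sizes (a 17-dimensional AU vector in the style of OpenFace 2.0 and a 29-dimensional audio feature in the style of DeepSpeech) are not taken from the paper, whose actual architecture and dimensions may differ.

```python
# Hypothetical sketch of attention-based disentanglement + audio fusion,
# loosely following the NeRF-AD abstract. Not the authors' implementation.
import torch
import torch.nn as nn

class AttentionDisentangle(nn.Module):
    """Predicts a spatial attention mask from speech-related AU features
    and uses it to split a face feature map into two complementary parts."""
    def __init__(self, feat_ch=64, au_dim=17):
        super().__init__()
        # Project the AU vector to a per-channel modulation, then predict
        # a single-channel spatial mask in [0, 1].
        self.au_proj = nn.Linear(au_dim, feat_ch)
        self.mask_head = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, face_feat, au_feat):
        # face_feat: (B, C, H, W); au_feat: (B, au_dim)
        mod = self.au_proj(au_feat)[:, :, None, None]   # (B, C, 1, 1)
        mask = self.mask_head(face_feat * mod)          # (B, 1, H, W)
        audio_face = face_feat * mask                   # speech-related region
        identity_face = face_feat * (1.0 - mask)        # complementary region
        return audio_face, identity_face, mask

class AudioFaceFusion(nn.Module):
    """Fuses only the Audio-face stream with the audio feature, so audio
    cannot perturb the identity-related region."""
    def __init__(self, feat_ch=64, audio_dim=29):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, feat_ch)
        self.fuse = nn.Conv2d(2 * feat_ch, feat_ch, 1)

    def forward(self, audio_face, audio_feat):
        # Broadcast the projected audio feature over all spatial locations.
        a = self.audio_proj(audio_feat)[:, :, None, None].expand_as(audio_face)
        return self.fuse(torch.cat([audio_face, a], dim=1))

if __name__ == "__main__":
    B, C, H, W = 2, 64, 32, 32
    face = torch.randn(B, C, H, W)
    aus = torch.rand(B, 17)      # e.g. OpenFace-style AU intensities
    audio = torch.randn(B, 29)   # e.g. a DeepSpeech-style audio feature
    audio_face, identity_face, mask = AttentionDisentangle(C, 17)(face, aus)
    fused = AudioFaceFusion(C, 29)(audio_face, audio)
    print(fused.shape)  # torch.Size([2, 64, 32, 32])
```

The complementary masks keep the two streams disjoint, which is one simple way to realize the abstract's claim that audio influences only the speech-related region; an AU-prediction loss on the fused feature would be a natural place to add the AU supervision of the fusion that the abstract mentions.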
