Self-Supervised Monocular 3D Face Reconstruction by Occlusion-Aware Multi-view Geometry Consistency (2007.12494v1)

Published 24 Jul 2020 in cs.CV

Abstract: Recent learning-based approaches, in which models are trained on single-view images, have shown promising results for monocular 3D face reconstruction, but they suffer from the ill-posed face pose and depth ambiguity issue. In contrast to previous works that only enforce 2D feature constraints, we propose a self-supervised training architecture by leveraging the multi-view geometry consistency, which provides reliable constraints on face pose and depth estimation. We first propose an occlusion-aware view synthesis method to apply multi-view geometry consistency to self-supervised learning. Then we design three novel loss functions for multi-view consistency, including the pixel consistency loss, the depth consistency loss, and the facial landmark-based epipolar loss. Our method is accurate and robust, especially under large variations of expressions, poses, and illumination conditions. Comprehensive experiments on the face alignment and 3D face reconstruction benchmarks have demonstrated superiority over state-of-the-art methods. Our code and model are released at https://github.com/jiaxiangshang/MGCNet.

Authors (7)
  1. Jiaxiang Shang
  2. Tianwei Shen
  3. Shiwei Li
  4. Lei Zhou
  5. Mingmin Zhen
  6. Tian Fang
  7. Long Quan
Citations (127)

Summary

The paper presents MGCNet, a self-supervised approach to monocular 3D face reconstruction that exploits multi-view geometry consistency to address the inherent challenges of monocular methods, particularly the ambiguity in face pose and depth estimation. Unlike previous methods that rely primarily on 2D feature constraints, it leverages multi-view constraints to provide more reliable supervision during training.

Key Contributions

  1. Self-Supervised Architecture: The authors introduce MGCNet, an end-to-end self-supervised framework for 3D face reconstruction and alignment. It is designed to address pose and depth ambiguities by employing multi-view geometry consistency. This is achieved through occlusion-aware view synthesis and novel consistency loss functions.
  2. Occlusion-Aware View Synthesis: A significant innovation of the work is the development of a differentiable covisible map that handles self-occlusion, thereby enhancing view synthesis. The map ensures that only pixels visible in both target and source views contribute to the consistency losses.
  3. Novel Loss Functions: The authors design three loss functions for multi-view geometry consistency: a pixel consistency loss, a depth consistency loss, and a facial landmark-based epipolar loss. Together these losses encourage consistent 3DMM parameters across views (a sketch of the first two terms follows this list).
  4. Experimental Superiority: The approach is demonstrated to outperform state-of-the-art methods significantly. For face alignment, MGCNet improves normalized mean error (NME) by more than 12%, and for 3D face reconstruction, it achieves a substantial 17% reduction in root mean squared error (RMSE) on challenging datasets.
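
As a concrete illustration of how the covisible map gates the consistency terms, below is a minimal sketch of the pixel and depth consistency losses. Tensor shapes, function names, and the equal weighting are illustrative assumptions, not the paper's released implementation (see the linked repository for that).

```python
# Minimal sketch of covisible-masked consistency losses (assumed shapes:
# images (B, 3, H, W), depths (B, 1, H, W), covisible map (B, 1, H, W)).
import torch

def masked_l1(a, b, mask, eps=1e-6):
    """Mean L1 error over pixels where the covisible mask is 1."""
    diff = (a - b).abs() * mask
    # Normalize by the number of covisible entries (the mask broadcasts
    # over channels, so scale its count by the channel dimension).
    return diff.sum() / (mask.sum() * a.shape[1] + eps)

def multiview_consistency(target_img, synth_img, target_depth, warped_depth, covis):
    """Pixel + depth consistency, evaluated only on covisible pixels.

    synth_img:    source view synthesized into the target view.
    warped_depth: source-view depth mapped into the target frame.
    covis:        binary map, 1 where a pixel is visible in both views.
    """
    l_pix = masked_l1(target_img, synth_img, covis)
    l_depth = masked_l1(target_depth, warped_depth, covis)
    return l_pix + l_depth  # per-term weights omitted for brevity
```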

Methodology

The paper leverages the 3D Morphable Model (3DMM) to parameterize face shape and texture, integrating these into an end-to-end framework. The authors use a pinhole camera model and spherical harmonics for illumination modeling, crafting an architecture that synthesizes target views and reinforces consistency via multi-view losses.
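
To make the parameterization concrete, here is a minimal sketch of the 3DMM shape assembly and pinhole projection just described. The basis dimensions and function names are illustrative assumptions; the paper's actual model and renderer live in the released code.

```python
# Sketch of 3DMM shape assembly and pinhole projection (illustrative
# dimensions: N vertices, K_id identity and K_exp expression bases).
import numpy as np

def reconstruct_shape(mean_shape, id_basis, exp_basis, alpha_id, alpha_exp):
    """3DMM shape: mean face plus linear identity/expression offsets.
    mean_shape: (3N,), id_basis: (3N, K_id), exp_basis: (3N, K_exp)."""
    flat = mean_shape + id_basis @ alpha_id + exp_basis @ alpha_exp
    return flat.reshape(-1, 3)  # (N, 3) vertex positions

def project_pinhole(vertices, K, R, t):
    """Project (N, 3) world-space vertices with a pinhole camera.
    K: (3, 3) intrinsics, R: (3, 3) rotation, t: (3,) translation."""
    cam = vertices @ R.T + t        # world -> camera coordinates
    uvw = cam @ K.T                 # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:]  # perspective divide -> pixels
```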

Key steps include:

  • Co-visible Map Generation: By projecting covisible triangles, the method effectively identifies visible regions across multiple views to mitigate occlusion challenges.
  • Multi-View Consistency Losses: These include a pixel consistency loss that minimizes photometric error against the synthesized target images, a depth consistency loss that enforces depth agreement across views, and a facial landmark-based epipolar loss that penalizes pose error via the essential matrix (see the sketch after this list).
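
For the epipolar term, a plausible formulation (assumed here, using the relative pose implied by the two predicted face poses) penalizes how far matched landmarks stray from the epipolar constraint x2^T E x1 = 0, with E = [t]_x R:

```python
# Sketch of a landmark-based epipolar residual; the exact weighting and
# normalization in MGCNet may differ from this illustrative version.
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ x == np.cross(t, x)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_residuals(x1, x2, R, t):
    """|x2^T E x1| for matched landmarks in two views.

    x1, x2: (N, 3) homogeneous normalized image coordinates of the
    same facial landmarks; (R, t) is the relative pose from view 1
    to view 2, giving the essential matrix E = [t]_x R."""
    E = skew(t) @ R
    return np.abs(np.einsum('ni,ij,nj->n', x2, E, x1))
```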

Implications and Future Directions

The work sets a new benchmark for monocular 3D face reconstruction by incorporating multi-view constraints, showing that such constraints substantially mitigate the ambiguities inherent in monocular estimation. The self-supervised nature of MGCNet is promising for reducing reliance on annotated data.

Looking ahead, the implications for the broader AI community include the potential adaptation of this multi-view consistency framework to other domains, such as video-based facial analysis and real-time avatar generation. Future research might explore integrating more sophisticated face models or enhancing the robustness of the method under varying environmental conditions or more extreme facial expressions.

In summary, this paper offers a comprehensive approach that advances the capabilities of 3D face reconstruction from single images, addressing long-standing challenges in pose and depth ambiguity via innovative multi-view consistency techniques.