
GeomHair: Reconstruction of Hair Strands from Colorless 3D Scans (2505.05376v2)

Published 8 May 2025 in cs.CV

Abstract: We propose a novel method that reconstructs hair strands directly from colorless 3D scans by leveraging multi-modal hair orientation extraction. Hair strand reconstruction is a fundamental problem in computer vision and graphics that can be used for high-fidelity digital avatar synthesis, animation, and AR/VR applications. However, accurately recovering hair strands from raw scan data remains challenging due to human hair's complex and fine-grained structure. Existing methods typically rely on RGB captures, which can be sensitive to the environment and can be a challenging domain for extracting the orientation of guiding strands, especially in the case of challenging hairstyles. To reconstruct the hair purely from the observed geometry, our method finds sharp surface features directly on the scan and estimates strand orientation through a neural 2D line detector applied to the renderings of scan shading. Additionally, we incorporate a diffusion prior trained on a diverse set of synthetic hair scans, refined with an improved noise schedule, and adapted to the reconstructed contents via a scan-specific text prompt. We demonstrate that this combination of supervision signals enables accurate reconstruction of both simple and intricate hairstyles without relying on color information. To facilitate further research, we introduce Strands400, the largest publicly available dataset of hair strands with detailed surface geometry extracted from real-world data, which contains reconstructed hair strands from the scans of 400 subjects.

Summary

GeomHair: Reconstruction of Hair Strands from Colorless 3D Scans

The paper "GeomHair: Reconstruction of Hair Strands from Colorless 3D Scans" introduces an innovative technique for accurately reconstructing hair strands directly from colorless 3D scans, addressing a significant gap in hair modeling within computer vision and graphics. The approach leverages a multi-modal hair orientation extraction mechanism, circumventing the need for RGB captures, which are traditionally sensitive to environmental conditions and challenges in extracting strand orientations.

Methodology Overview

The proposed method, GeomHair, focuses on reconstructing hairstyles by integrating sharp feature extraction from scan geometry with both 3D and 2D orientation detectors. Specifically, it identifies sharp surface features using Crest Lines on the mesh surface and employs a neural 2D line orientation detector applied to rendered shading from multiple perspectives. This dual approach enhances the reliability of strand orientation estimates.
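The paper does not spell out how the per-view 2D detections are fused, but a standard way to combine sign-ambiguous line orientations from multiple camera views (hair strands have no canonical direction, so directions differing by 180° are equivalent) is to lift each 2D orientation into 3D using the camera basis and average via a structure tensor. The sketch below is purely illustrative, with hypothetical function names and a toy two-view setup:

```python
import numpy as np

def lift_2d_orientation(theta, right, up):
    """Lift an in-image-plane orientation angle theta to a 3D direction
    using the camera's right/up basis vectors (hypothetical helper)."""
    return np.cos(theta) * right + np.sin(theta) * up

def fuse_orientations(dirs):
    """Fuse sign-ambiguous 3D line directions via the principal
    eigenvector of the summed outer-product (structure) tensor,
    which is invariant to flipping any individual direction."""
    T = sum(np.outer(d, d) for d in dirs)
    w, V = np.linalg.eigh(T)          # eigenvalues in ascending order
    return V[:, np.argmax(w)]         # dominant orientation

# Toy example: two views observe the same vertical strand but report
# opposite signs; the tensor average still recovers the z-axis.
right_a, up_a = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
right_b, up_b = np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])
d1 = lift_2d_orientation(np.pi / 2, right_a, up_a)   # -> +z
d2 = lift_2d_orientation(-np.pi / 2, right_b, up_b)  # -> -z
fused = fuse_orientations([d1, d2])
print(np.abs(fused))  # ~ [0, 0, 1]
```

The tensor trick is the reason orientation (rather than direction) fields are typically averaged as outer products: a naive vector mean of `d1` and `d2` above would cancel to zero.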

To guide the synthesis of realistic hair strand geometry, GeomHair incorporates a diffusion prior trained on a diverse set of synthetic hair scans. This prior uses an improved noise schedule and is adapted to the reconstructed content via scan-specific text prompts generated by a vision-language model, improving accuracy on complex hairstyles.
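The paper's exact refined schedule is not reproduced here, but the mechanism being tuned is the cumulative signal-retention curve ᾱ(t) of the diffusion process. As a hedged illustration, the widely used cosine schedule shows the general shape such a schedule takes; the function below is a standard formulation, not the authors' schedule:

```python
import numpy as np

def cosine_alpha_bar(t, s=0.008):
    """Cosine cumulative noise schedule (standard formulation):
    alpha_bar(t) is the fraction of signal variance surviving at
    diffusion time t in [0, 1]; s offsets t to avoid a singularity."""
    f = np.cos((t + s) / (1.0 + s) * np.pi / 2.0) ** 2
    return f / np.cos(s / (1.0 + s) * np.pi / 2.0) ** 2

t = np.linspace(0.0, 1.0, 5)
print(cosine_alpha_bar(t))  # decreases from 1.0 toward 0.0
```

A schedule like this controls how aggressively the prior denoises at each step; "improving" it typically means reshaping this curve so that supervision from the prior stays informative across more of the trajectory.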

Results and Contributions

The GeomHair method represents a pivotal shift in the capability to reconstruct hair strands with higher robustness solely from geometric data, which is abundantly available through modern 3D scanning technologies such as structured light scanners. The authors provide Strands400, the largest publicly available dataset containing detailed hair strands reconstructed from scans of 400 subjects, promoting further research.

Through rigorous quantitative and qualitative evaluation, GeomHair delivers competitive performance compared to state-of-the-art methods that rely on RGB data, with particular strength on wavy and curly hairstyles. Its strand-based reconstruction exhibits more realistic strand alignment and coverage than the alternatives. However, some limitations were noted for very curly hairstyles, which could be attributed to the scan's inability to capture certain intricate details.

Implications and Future Directions

Given the advancement GeomHair achieves in strand-based hair reconstruction without dependency on color data, the practical implications are manifold, ranging from digital avatar synthesis to interactive AR/VR experiences and gaming applications that demand high-fidelity hair dynamics.

Theoretically, the success of GeomHair suggests broader potential for geometry-based modeling paradigms that do not rely on color information, thus circumventing privacy concerns associated with handling high-resolution RGB data. Future research may focus on refining the generative models, improving the handling of extremely detailed and textured hairstyles, and exploring applications in other domains where accurate geometric data is available.

Overall, GeomHair presents a significant opportunity to enhance the realism and accessibility of hair modeling technologies, offering a robust framework for future exploration and innovation in 3D hair strand reconstruction.