
FabricDiffusion: High-Fidelity Texture Transfer for 3D Garments Generation from In-The-Wild Clothing Images (2410.01801v1)

Published 2 Oct 2024 in cs.CV, cs.AI, and cs.GR

Abstract: We introduce FabricDiffusion, a method for transferring fabric textures from a single clothing image to 3D garments of arbitrary shapes. Existing approaches typically synthesize textures on the garment surface through 2D-to-3D texture mapping or depth-aware inpainting via generative models. Unfortunately, these methods often struggle to capture and preserve texture details, particularly due to challenging occlusions, distortions, or poses in the input image. Inspired by the observation that in the fashion industry, most garments are constructed by stitching sewing patterns with flat, repeatable textures, we cast the task of clothing texture transfer as extracting distortion-free, tileable texture materials that are subsequently mapped onto the UV space of the garment. Building upon this insight, we train a denoising diffusion model with a large-scale synthetic dataset to rectify distortions in the input texture image. This process yields a flat texture map that enables a tight coupling with existing Physically-Based Rendering (PBR) material generation pipelines, allowing for realistic relighting of the garment under various lighting conditions. We show that FabricDiffusion can transfer various features from a single clothing image including texture patterns, material properties, and detailed prints and logos. Extensive experiments demonstrate that our model significantly outperforms state-of-the-art methods on both synthetic data and real-world, in-the-wild clothing images while generalizing to unseen textures and garment shapes.

Summary

  • The paper introduces FabricDiffusion, a method that leverages denoising diffusion models to transfer distortion-free, tileable texture maps to 3D garments.
  • The paper demonstrates that training on a large-scale synthetic dataset enables zero-shot generalization for accurate Physically-Based Rendering under varied lighting.
  • The paper highlights FabricDiffusion's potential for advancing digital fashion through applications in virtual try-on, gaming, and augmented reality.

High-Fidelity Texture Transfer for 3D Garments: Evaluating FabricDiffusion

The paper "FabricDiffusion: High-Fidelity Texture Transfer for 3D Garments Generation from In-The-Wild Clothing Images" introduces a novel approach to the complex challenge of transferring fabric textures from 2D clothing images to 3D garment models. The authors present FabricDiffusion, a method that leverages denoising diffusion models for creating distortion-free, tileable texture maps suitable for Physically-Based Rendering (PBR). This paper addresses the limitations of existing 2D-to-3D texture mapping techniques that struggle with maintaining texture fidelity amidst occlusions and distortions.

FabricDiffusion frames texture transfer the way the fashion industry constructs garments: by stitching flat sewing patterns with repeatable textures. The authors train a diffusion model on a large-scale synthetic dataset covering diverse textures and material properties. The resulting texture maps are clean enough to feed directly into PBR material generation, enabling realistic relighting under varied lighting conditions. Experiments show that FabricDiffusion substantially outperforms existing methods at preserving texture detail on both synthetic and real-world images.
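The following small illustration shows how a flat, tileable texture patch could be repeated across a garment's UV space and paired with PBR channels. The function name, sizes, and the constant roughness map are hypothetical placeholders; the paper's material-estimation pipeline is not reproduced here.

```python
# Sketch: tile a rectified texture patch into a UV-space albedo map and pair it
# with placeholder PBR channels. Names and values are illustrative assumptions.
import numpy as np

def tile_to_uv(texture: np.ndarray, uv_size: int = 1024, repeats: int = 8) -> np.ndarray:
    """Tile a square texture patch `repeats` times, then crop to uv_size x uv_size."""
    tiled = np.tile(texture, (repeats, repeats, 1))
    return tiled[:uv_size, :uv_size]

patch = np.random.rand(128, 128, 3).astype(np.float32)  # stand-in for the diffusion output
albedo_uv = tile_to_uv(patch)
# In a full pipeline, roughness/normal maps would come from PBR material generation;
# here a constant roughness map serves as a placeholder.
roughness_uv = np.full(albedo_uv.shape[:2] + (1,), 0.7, dtype=np.float32)
print(albedo_uv.shape, roughness_uv.shape)  # (1024, 1024, 3) (1024, 1024, 1)
```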

The implications of this work extend to areas such as virtual try-on applications and e-commerce, where there is growing demand for high-quality 3D garment assets. Moreover, the ability to generate standard PBR materials means that this method can integrate seamlessly into existing digital workflows for garment rendering in gaming, virtual reality, and augmented reality.

In terms of contributions to the field, FabricDiffusion stands out for its methodological rigor and practical applicability. It addresses two persistent problems in garment rendering: texture distortion and sensitivity to lighting variation. Training purely on synthetic data with a diffusion model yields zero-shot generalization to real-world images, opening the door to near-photorealistic digital fashion modeling. Future research could extend the approach to more complex fabric types and broaden the diversity of material representations in the training data.

In summary, FabricDiffusion exemplifies a state-of-the-art approach to computational fabric rendering and texture transfer, laying a foundation for further advances in digital garment representation. The methodology demonstrates the value of applying generative models to textile textures and opens new avenues for research and practical applications in the AI and computer vision communities.
