
Deep Geometric Texture Synthesis (2007.00074v1)

Published 30 Jun 2020 in cs.GR, cs.CV, and cs.LG

Abstract: Recently, deep generative adversarial networks for image generation have advanced rapidly; yet, only a small amount of research has focused on generative models for irregular structures, particularly meshes. Nonetheless, mesh generation and synthesis remains a fundamental topic in computer graphics. In this work, we propose a novel framework for synthesizing geometric textures. It learns geometric texture statistics from local neighborhoods (i.e., local triangular patches) of a single reference 3D model. It learns deep features on the faces of the input triangulation, which is used to subdivide and generate offsets across multiple scales, without parameterization of the reference or target mesh. Our network displaces mesh vertices in any direction (i.e., in the normal and tangential direction), enabling synthesis of geometric textures, which cannot be expressed by a simple 2D displacement map. Learning and synthesizing on local geometric patches enables a genus-oblivious framework, facilitating texture transfer between shapes of different genus.

Authors (4)
  1. Amir Hertz (21 papers)
  2. Rana Hanocka (32 papers)
  3. Raja Giryes (156 papers)
  4. Daniel Cohen-Or (173 papers)
Citations (61)

Summary

  • The paper introduces a deep learning framework that uses hierarchical GANs to synthesize realistic geometric textures on 3D meshes.
  • It adapts convolutional operations to the irregular structure of triangular patches, enabling precise vertex displacement across varying mesh genus.
  • Results show versatile texture transfer and refined detail generation, advancing applications in computer graphics and virtual modeling.

Overview of "Deep Geometric Texture Synthesis"

The paper "Deep Geometric Texture Synthesis" proposes a framework for generating geometric textures on 3D meshes through a deep learning approach. The work contributes to geometric deep learning by addressing an underexplored area of shape synthesis in which irregular structures, specifically meshes, pose unique challenges. While recent advances in generative models are most visible in 2D image generation, the paper argues for a generative adversarial network (GAN) tailored to 3D mesh synthesis, owing to the complex, unordered, and irregular nature of mesh topologies.

Proposed Methodology

The authors introduce a deep neural network framework that leverages convolutional neural networks (CNNs) adapted for the triangulation of 3D meshes. This method aims to learn geometric texture statistics from local neighborhoods, specifically triangular patches, of a reference 3D model. The approach involves calculating deep features on mesh faces and facilitating vertex displacement across both normal and tangential directions. This model is genus-oblivious, allowing for texture transfer between shapes of varying genus—a notable computational challenge in traditional shape synthesis methods.
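To make the displacement idea concrete, the sketch below (a minimal illustration with hypothetical names, not the paper's code) splits a per-vertex displacement into its normal and tangential components; a scalar 2D displacement map can only realize the normal part, which is why the extra tangential freedom matters:

```python
import numpy as np

def decompose_displacement(d, n):
    """Split a displacement vector d into components along and orthogonal
    to the unit vertex normal n (hypothetical helper, not the paper's API)."""
    n = n / np.linalg.norm(n)
    d_normal = np.dot(d, n) * n   # the part a scalar displacement map can express
    d_tangent = d - d_normal      # the tangential freedom the network also uses
    return d_normal, d_tangent

# Example: a displacement that slides the vertex sideways as well as outward.
d = np.array([0.3, -0.1, 0.5])
n = np.array([0.0, 0.0, 1.0])
d_normal, d_tangent = decompose_displacement(d, n)
```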

The framework progresses from low-resolution meshes, enhancing detail iteratively through subdivisions in a hierarchical scale space. This hierarchical method allows for subtle and fine-grained geometric texture refinement, achieving synthesis results that would be highly complex to parameterize manually.
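The coarse-to-fine progression relies on mesh subdivision. A minimal illustrative midpoint subdivision might look like the sketch below (simplified: midpoints on shared edges are duplicated rather than merged, which a real pipeline would avoid):

```python
import numpy as np

def midpoint_subdivide(verts, faces):
    """One 4-to-1 midpoint subdivision step (illustrative sketch only)."""
    new_verts = list(verts)
    new_faces = []
    for a, b, c in faces:
        ab = len(new_verts); new_verts.append((verts[a] + verts[b]) / 2)
        bc = len(new_verts); new_verts.append((verts[b] + verts[c]) / 2)
        ca = len(new_verts); new_verts.append((verts[c] + verts[a]) / 2)
        # each triangle becomes four smaller ones
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(new_verts), new_faces
```

At each scale the network would then predict vertex offsets on the refined mesh, adding detail the coarser level could not represent.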

Mathematical Formulations and Architecture

The paper details the convolution mechanisms adapted for mesh-based data, highlighting how fixed-sized convolutional neighborhoods are established despite the irregular mesh structure. The authors describe symmetric face convolutions that maintain invariance to mesh orderings and explore how local geometric features are abstracted into deep features suited for generative tasks.
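One simple way to obtain order-invariant face convolutions, in the spirit of (but not copied from) the paper's operator, is to pool the three neighbor features with symmetric functions before a learned linear map; the sketch below is a hypothetical illustration:

```python
import numpy as np

def symmetric_face_conv(f, neighbors, W):
    """Order-invariant face convolution sketch (illustrative only).
    f: (C,) center-face feature; neighbors: (3, C) features of the three
    adjacent faces in arbitrary order; W: (4*C, C_out) learned weights.
    Symmetric pooling (sum, max, min) removes dependence on neighbor order."""
    pooled = np.concatenate([
        f,
        neighbors.sum(axis=0),
        neighbors.max(axis=0),
        neighbors.min(axis=0),
    ])
    return pooled @ W
```

Because every pooling function is symmetric in its three inputs, permuting the neighbors leaves the output unchanged, which is exactly the invariance to mesh orderings the paper requires.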

Through a series of hierarchical GAN networks, the generator progressively refines the mesh, learning from deep geometric features characteristic of the input data. The structure facilitates vertex displacement to synthesize local textures indistinguishable from reference statistics. This multiscale approach offers adaptive texture synthesis capacities without the need for complex mesh parameterizations or transformations.
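The multiscale loop can be schematized as below, with a random stand-in for the learned per-scale generator (purely illustrative; the actual generator conditions on deep face features and adversarial feedback):

```python
import numpy as np

rng = np.random.default_rng(0)

def generator_stub(verts, scale):
    """Random stand-in for the learned per-scale generator, which in the
    paper predicts vertex offsets from deep face features."""
    return 0.1 / (scale + 1) * rng.standard_normal(verts.shape)

def refine(verts, n_scales):
    """Coarse-to-fine loop: at each scale the mesh would be subdivided
    (omitted here) and vertices displaced by the generator's offsets."""
    for s in range(n_scales):
        verts = verts + generator_stub(verts, s)
    return verts
```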

Results and Implications

The paper presents comprehensive results showcasing the effective transfer of geometric textures across meshes of various genera and resolutions. It demonstrates that the learned texture synthesis adapts to arbitrary target meshes, generating realistic geometric features while preserving the character of the original reference textures.

The implications of this research are wide-reaching, particularly in areas demanding high-quality geometric mesh modeling such as computer graphics, animation, and virtual environment construction. By circumventing traditional parameterization constraints, the model broadens the flexibility and applicability of geometric deep learning in the field.

Conclusion and Future Directions

"Deep Geometric Texture Synthesis" advances the agenda for future research in automatic texture synthesis, focusing on learning from a single model to extrapolate complex geometric details. Future explorations could address anisotropic textures or refine the hierarchical learning procedure to accommodate non-uniform meshes. The paper paves the way for further advances in generative modeling over complex mesh structures, broadening the potential of AI applications in creative industries and scientific modeling.
