Deep Geometric Texture Synthesis

Published 30 Jun 2020 in cs.GR, cs.CV, and cs.LG | (2007.00074v1)

Abstract: Recently, deep generative adversarial networks for image generation have advanced rapidly; yet, only a small amount of research has focused on generative models for irregular structures, particularly meshes. Nonetheless, mesh generation and synthesis remains a fundamental topic in computer graphics. In this work, we propose a novel framework for synthesizing geometric textures. It learns geometric texture statistics from local neighborhoods (i.e., local triangular patches) of a single reference 3D model. It learns deep features on the faces of the input triangulation, which is used to subdivide and generate offsets across multiple scales, without parameterization of the reference or target mesh. Our network displaces mesh vertices in any direction (i.e., in the normal and tangential direction), enabling synthesis of geometric textures, which cannot be expressed by a simple 2D displacement map. Learning and synthesizing on local geometric patches enables a genus-oblivious framework, facilitating texture transfer between shapes of different genus.

Citations (61)

Summary

  • The paper introduces a novel framework that synthesizes 3D geometric textures via hierarchical GANs applied to local triangular mesh patches.
  • It leverages convolutional neural networks to learn vertex displacements from multiscale mesh subdivisions, refining texture details iteratively.
  • Experimental results validate the framework's ability to transfer textures across various mesh topologies while highlighting challenges with anisotropic structures.

Introduction

The paper "Deep Geometric Texture Synthesis" (2007.00074) introduces a novel framework for synthesizing geometric textures on 3D meshes using generative adversarial networks (GANs). The framework learns geometric texture statistics from local neighborhoods of triangular patches in a single 3D model and applies them to any arbitrary target mesh without requiring parameterization. This approach is fundamentally distinct from traditional 3D modeling techniques that rely on 2D displacement maps, as it enables vertex displacements in any direction, allowing for complex geometric textures to be synthesized across meshes of varying genus.

Method Overview

The core methodology leverages convolutional neural networks (CNNs) adapted to meshes to model the distribution of geometric textures directly from the input triangular mesh. By focusing on local triangular patches, the network learns to subdivide faces and generate offsets that reflect the local structure of the reference model. The model follows a hierarchical process, beginning with a low-resolution mesh and iteratively refining the geometry through subdivisions at multiple scales (Figure 1).

Figure 1: Method overview showing noise addition, feature extraction, and hierarchical mesh progression.
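As a concrete, deliberately simplified illustration of this idea, the sketch below shows a single generator scale that maps per-face features to per-vertex 3D displacements. It is not the authors' code: the feature dimensions, the plain 1x1 face convolutions, and the `face_to_vertex` averaging matrix are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class FaceDisplacementGenerator(nn.Module):
    """Toy sketch of one generator scale: per-face features -> per-vertex offsets."""

    def __init__(self, in_feats=8, hidden=64):
        super().__init__()
        # 1x1 convolutions over the face dimension act as a shared per-face MLP.
        self.net = nn.Sequential(
            nn.Conv1d(in_feats, hidden, 1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 1), nn.ReLU(),
            nn.Conv1d(hidden, 3, 1),            # one 3D offset per face
        )

    def forward(self, face_feats, face_to_vertex):
        # face_feats:     (1, in_feats, F) per-face features (with a noise channel mixed in)
        # face_to_vertex: (V, F) averaging matrix scattering face offsets to their vertices
        face_offsets = self.net(face_feats).squeeze(0).t()   # (F, 3)
        return face_to_vertex @ face_offsets                  # (V, 3) vertex displacements
```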

Hierarchical Training and Multiscale Mesh Generation

The multiscale approach first generates a series of mesh resolutions: starting from a low-resolution template, the mesh is repeatedly subdivided and optimized to match the reference geometry. These multiscale training inputs serve as the groundwork for training a hierarchy of GAN generators and discriminators, with each scale capturing progressively finer detail (Figure 2).

Figure 2: Multiscale training data generation strategy for optimizing meshes across resolutions.
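A rough sketch of how such multiscale training data could be produced under simplifying assumptions: the vertices of the current (subdivided) template are fitted to points sampled from the reference surface by minimizing a naive point-to-point Chamfer distance. The paper's actual optimization and subdivision scheme are more involved; this is only meant to make the fitting step concrete.

```python
import torch

def chamfer(a, b):
    # a: (N, 3) and b: (M, 3) point sets; symmetric nearest-neighbour distance.
    d = torch.cdist(a, b)                                  # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def fit_to_reference(verts, ref_points, steps=200, lr=1e-2):
    # Optimize the template's vertex positions so the coarse mesh
    # approximates the reference geometry at this scale.
    verts = verts.clone().requires_grad_(True)
    opt = torch.optim.Adam([verts], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = chamfer(verts, ref_points)
        loss.backward()
        opt.step()
    return verts.detach()
```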

At each scale, the generator synthesizes vertex displacements, and the discriminator evaluates the resulting local patches against real patches of the reference mesh at the corresponding resolution. Face-based convolutions, akin to those of existing frameworks like MeshCNN, keep the network invariant to transformations and to the ordering of mesh faces.
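The following sketch spells out what one adversarial step at a single scale could look like under these assumptions; the patch-feature extraction (`patch_feats_fn`) and the plain non-saturating GAN losses are stand-ins for the paper's actual discriminator input and objective.

```python
import torch
import torch.nn.functional as F

def adversarial_step(gen, disc, opt_g, opt_d,
                     coarse_feats, face_to_vertex, coarse_verts,
                     real_patch_feats, patch_feats_fn):
    # --- discriminator update: real reference patches vs. displaced (fake) patches ---
    with torch.no_grad():
        fake_verts = coarse_verts + gen(coarse_feats, face_to_vertex)
    real_scores = disc(real_patch_feats)
    fake_scores = disc(patch_feats_fn(fake_verts))
    d_loss = (F.binary_cross_entropy_with_logits(real_scores, torch.ones_like(real_scores))
              + F.binary_cross_entropy_with_logits(fake_scores, torch.zeros_like(fake_scores)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- generator update: make displaced patches look like reference patches ---
    fake_verts = coarse_verts + gen(coarse_feats, face_to_vertex)
    fake_scores = disc(patch_feats_fn(fake_verts))
    g_loss = F.binary_cross_entropy_with_logits(fake_scores, torch.ones_like(fake_scores))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```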

Texture Synthesis and Transfer

A key feature of this framework is its ability to generate and transfer textures to target meshes whose topology differs from that of the reference mesh. The genus-oblivious formulation sidesteps the parameterization difficulties faced by conventional methods. Synthesis is performed by passing the target mesh through the trained generator hierarchy, which applies the learned geometric texture to varied target shapes (Figure 3).

Figure 3: Transferring geometric texture from a reference to a target mesh.
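At inference time, the transfer described above could look roughly like the sketch below: the target mesh is pushed through the trained generators, coarse to fine, with a subdivision step between scales. `compute_face_feats`, `face_to_vertex_fn`, and `subdivide` are hypothetical helpers standing in for the paper's components.

```python
import torch

@torch.no_grad()
def transfer_texture(verts, faces, generators,
                     compute_face_feats, face_to_vertex_fn, subdivide, noise_std=0.1):
    for gen in generators:                                   # one trained generator per scale
        feats = compute_face_feats(verts, faces)             # (1, C, F) per-face features
        feats = feats + noise_std * torch.randn_like(feats)  # inject noise for stochastic detail
        verts = verts + gen(feats, face_to_vertex_fn(faces, verts.shape[0]))
        verts, faces = subdivide(verts, faces)               # refine before the next, finer scale
    return verts, faces
```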

Experimental Results

Experiments conducted on a variety of models demonstrate the framework's ability to synthesize textures on different target meshes while preserving the reference's local statistical properties. Key strengths highlighted include the hierarchical generation's ability to start from different template levels and the generative model's capacity to create variations of the reference texture via latent-space interpolation (Figure 4).

Figure 4: Hierarchical texture scale space synthesizing textures on novel target shapes.
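The latent-space interpolation mentioned above can be illustrated with a tiny sketch: two noise samples fed to the synthesis pipeline are linearly blended, and each blend yields a different variation of the texture. `synth_fn` is a placeholder for a full forward pass through the generator hierarchy.

```python
import torch

def interpolate_variations(z_a, z_b, synth_fn, num=5):
    results = []
    for t in torch.linspace(0.0, 1.0, num):
        z = (1 - t) * z_a + t * z_b      # linear blend of the two noise samples
        results.append(synth_fn(z))      # synthesize a texture variant from the blend
    return results
```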

Limitations and Future Work

The framework exhibits certain limitations, such as its focus on local textures, making it less effective at capturing large, global structures. Additionally, its isometric assumption restricts its ability to handle anisotropic textures. Future work aims to address these limitations, potentially by integrating directional field learning for texture directionality and by improving the mesh subdivision strategy to handle more complex geometries. The methodology also opens avenues for derivative works exploring color texture transfer and animated shape interpolation (Figure 5).

Figure 5: Current limitations in non-isometric texture transfer.

Conclusion

"Deep Geometric Texture Synthesis" presents a novel paradigm in geometric texture modeling, moving beyond 2D displacement limitations to a more flexible and powerful synthesis approach. It offers a transferable framework adaptable to diverse mesh topologies without requiring intricate parameterization. This advancement paves the way for broader applications in fields like computer graphics and 3D modeling, promising enhancements in the realism and variety of synthesized textures.
