- The paper introduces a novel framework that synthesizes 3D geometric textures via hierarchical GANs applied to local triangular mesh patches.
- It leverages convolutional neural networks to learn vertex displacements from multiscale mesh subdivisions, refining texture details iteratively.
- Experimental results validate the framework's ability to transfer textures across various mesh topologies while highlighting challenges with anisotropic structures.
Deep Geometric Texture Synthesis
Introduction
The paper "Deep Geometric Texture Synthesis" (2007.00074) introduces a novel framework for synthesizing geometric textures on 3D meshes using generative adversarial networks (GANs). The framework learns geometric texture statistics from local neighborhoods of triangular patches in a single 3D model and applies them to any arbitrary target mesh without requiring parameterization. This approach is fundamentally distinct from traditional 3D modeling techniques that rely on 2D displacement maps, as it enables vertex displacements in any direction, allowing for complex geometric textures to be synthesized across meshes of varying genus.
Method Overview
The core methodology adapts convolutional neural networks (CNNs) to meshes in order to model the distribution of geometric textures directly from an input triangular mesh. By focusing on local triangular patches, the network learns to subdivide the mesh and generate vertex offsets that reflect the local structure of the reference model. The process is hierarchical: it begins with a low-resolution mesh and iteratively refines the geometry through subdivision at multiple scales.
Figure 1: Method overview showing noise addition, feature extraction, and hierarchical mesh progression.
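As a rough illustration of this coarse-to-fine loop, the sketch below implements plain midpoint subdivision and applies a stack of generator levels. Here `G` is a placeholder callable returning per-vertex offsets, and midpoint subdivision merely stands in for the paper's subdivision scheme:

```python
import numpy as np

def midpoint_subdivide(verts, faces):
    """1-to-4 subdivision: insert a vertex at every edge midpoint and
    split each triangle into four smaller triangles."""
    new_verts, edge_mid = list(verts), {}

    def mid(a, b):
        key = (min(a, b), max(a, b))
        if key not in edge_mid:
            edge_mid[key] = len(new_verts)
            new_verts.append((verts[a] + verts[b]) / 2.0)
        return edge_mid[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        new_faces += [[a, ab, ca], [ab, b, bc], [ca, bc, c], [ab, bc, ca]]
    return np.asarray(new_verts), np.asarray(new_faces)

def refine(verts, faces, generators):
    """Coarse-to-fine synthesis: at each level, subdivide the mesh and
    add that level's learned per-vertex displacement field."""
    for G in generators:
        verts, faces = midpoint_subdivide(verts, faces)
        verts = verts + G(verts, faces)
    return verts, faces
```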
Hierarchical Training and Multiscale Mesh Generation
The multiscale approach begins by generating a series of mesh resolutions through an optimization strategy that starts from a low-resolution template. The template is repeatedly subdivided, and its vertices are optimized to match the reference mesh geometry. The resulting multiscale meshes serve as training data for a hierarchy of GAN generators and discriminators, with each scale capturing progressively finer details.
Figure 2: Multiscale training data generation strategy for optimizing meshes across resolutions.
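A minimal PyTorch sketch of this data-generation step, assuming the reference surface is represented by sampled points and using a symmetric chamfer distance as a stand-in for the paper's exact fitting objective and regularizers:

```python
import torch

def chamfer(a, b):
    """Symmetric chamfer distance between point sets a: (N, 3) and b: (M, 3)."""
    d = torch.cdist(a, b)  # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def fit_level(template_verts, ref_points, steps=500, lr=1e-3):
    """Optimize the subdivided template's vertices to match the reference.
    Run once per subdivision level to produce the multiscale training meshes."""
    verts = template_verts.clone().requires_grad_(True)
    opt = torch.optim.Adam([verts], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        chamfer(verts, ref_points).backward()
        opt.step()
    return verts.detach()
```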
During training, the generator synthesizes vertex displacements and the discriminator evaluates them against real configurations from the reference mesh, with each GAN pair operating at a single scale of the hierarchy. Face-based convolutions, akin to the edge-based convolutions of MeshCNN, make the framework invariant to rigid transformations and to the ordering of mesh faces.
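The per-scale adversarial step might look like the following sketch, where `G` and `D` are placeholder modules acting on vertex tensors (the paper builds them from mesh convolutions, and connectivity is omitted here for brevity), and a standard non-saturating GAN loss stands in for the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def train_one_scale(G, D, coarse_verts, real_verts, opt_g, opt_d, steps=2000):
    """One scale of the hierarchy: G maps noisy coarse vertices to
    displacements; D scores local configurations as real or generated."""
    for _ in range(steps):
        z = torch.randn_like(coarse_verts)          # per-vertex noise
        fake = coarse_verts + G(coarse_verts + z)   # displaced vertices

        opt_d.zero_grad()                           # discriminator step
        d_loss = (F.softplus(-D(real_verts)).mean()
                  + F.softplus(D(fake.detach())).mean())
        d_loss.backward()
        opt_d.step()

        opt_g.zero_grad()                           # generator step
        F.softplus(-D(fake)).mean().backward()
        opt_g.step()
```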
Texture Synthesis and Transfer
A key feature of the framework is its ability to generate and transfer textures onto target meshes whose topology differs from the reference. Because the approach is genus-agnostic, it avoids the parameterization difficulties of conventional methods. Synthesis is performed by passing the target mesh through the trained generator hierarchy, which applies the learned geometric texture to the new shape.
Figure 3: Transferring geometric texture from a reference to a target mesh.
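In pseudocode, transfer reuses the trained hierarchy unchanged; only the input mesh differs. The sketch below reuses `midpoint_subdivide` and the placeholder generators from the earlier sketches, with fresh per-vertex noise injected at each level (the noise handling here is an assumption, simplified from the paper):

```python
import numpy as np

def transfer_texture(target_verts, target_faces, generators, noise_scale=0.1):
    """Apply a texture learned from one reference mesh to an unrelated
    target: subdivide, perturb with noise, add learned displacements."""
    verts, faces = target_verts, target_faces
    for G in generators:
        verts, faces = midpoint_subdivide(verts, faces)
        z = noise_scale * np.random.randn(*verts.shape)
        verts = verts + G(verts + z, faces)
    return verts, faces
```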
Experimental Results
Experiments on a variety of models demonstrate the framework's ability to synthesize textures onto different target meshes while preserving the reference's local statistical properties. Highlighted strengths include the hierarchy's ability to begin generation from different template levels and the generative model's capacity to produce variations of the reference texture via latent-space interpolation.

Figure 4: Hierarchical texture scale space synthesizing textures on novel target shapes.
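Latent-space interpolation can be sketched at a single generator level: blend two noise samples and re-synthesize, yielding a family of textures between two variants (again with a placeholder `G`; the paper performs this within its multiscale setup):

```python
import numpy as np

def interpolate_variants(G, verts, faces, z0, z1, n_steps=5):
    """Blend two per-vertex noise samples z0, z1 and regenerate the
    texture at each blend weight, giving a smooth family of variants."""
    return [verts + G(verts + (1.0 - t) * z0 + t * z1, faces)
            for t in np.linspace(0.0, 1.0, n_steps)]
```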
Limitations and Future Work
The framework has certain limitations: its focus on local texture statistics makes it less effective at capturing large, global structures, and its isotropy assumption restricts its ability to handle anisotropic textures with a pronounced directionality. Future work aims to address these limitations, potentially by learning directional fields to control texture orientation and by improving the mesh subdivision strategy to handle more complex geometries. The methodology also opens avenues for follow-up work exploring color texture transfer and animated shape interpolation.
Figure 5: Current limitations in non-isometric texture transfer.
Conclusion
"Deep Geometric Texture Synthesis" presents a novel paradigm in geometric texture modeling, moving beyond 2D displacement limitations to a more flexible and powerful synthesis approach. It offers a transferable framework adaptable to diverse mesh topologies without requiring intricate parameterization. This advancement paves the way for broader applications in fields like computer graphics and 3D modeling, promising enhancements in the realism and variety of synthesized textures.