Geometry-Informed Neural Networks (2402.14009v3)

Published 21 Feb 2024 in cs.LG and cs.CV

Abstract: Geometry is a ubiquitous tool in computer graphics, design, and engineering. However, the lack of large shape datasets limits the application of state-of-the-art supervised learning methods and motivates the exploration of alternative learning strategies. To this end, we introduce geometry-informed neural networks (GINNs) -- a framework for training shape-generative neural fields without data by leveraging user-specified design requirements in the form of objectives and constraints. By adding diversity as an explicit constraint, GINNs avoid mode-collapse and can generate multiple diverse solutions, often required in geometry tasks. Experimentally, we apply GINNs to several validation problems and a realistic 3D engineering design problem, showing control over geometrical and topological properties, such as surface smoothness or the number of holes. These results demonstrate the potential of training shape-generative models without data, paving the way for new generative design approaches without large datasets.


Summary

  • The paper introduces GINNs, a framework that leverages geometric constraints for data-free generative modeling.
  • It employs neural fields together with explicit diversity and connectedness losses to produce varied, structurally sound outputs across design tasks.
  • Experiments in 2D and 3D settings, including a realistic engineering design task, demonstrate control over geometric and topological properties.

Geometry-Informed Neural Networks

This paper introduces Geometry-Informed Neural Networks (GINNs), a method that uses geometric constraints to guide the training of generative neural networks. GINNs operate without training data, relying instead on user-specified geometric constraints and objectives to generate solutions. The authors propose an explicit diversity loss to curb mode collapse, a common failure mode of generative models in which the generator produces less varied outputs than desired. The approach is evaluated across a range of two- and three-dimensional problems, demonstrating its efficacy and robustness.
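
To make the data-free training setup concrete, here is a minimal sketch, assuming a PyTorch implementation, of the general GINN pattern: a latent-conditioned neural field optimized against a sum of constraint-violation penalties evaluated at sampled points, with no training shapes involved. The architecture, the `interface_loss` and `envelope_loss` penalties, and all sampling choices below are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConditionalField(nn.Module):
    """Latent-conditioned implicit field f(x, z) -> scalar level-set value."""
    def __init__(self, x_dim=3, z_dim=8, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, z):
        # Broadcast one latent code across a batch of query points.
        z = z.expand(x.shape[0], -1)
        return self.net(torch.cat([x, z], dim=-1)).squeeze(-1)

def interface_loss(f_vals):
    # Hypothetical penalty: prescribed interface points should lie on the
    # zero level set, so drive |f| to zero there.
    return f_vals.abs().mean()

def envelope_loss(f_vals):
    # Hypothetical penalty: points outside the design envelope must be
    # exterior (f > 0 under an SDF-like sign convention).
    return torch.relu(-f_vals).mean()

field = ConditionalField()
opt = torch.optim.Adam(field.parameters(), lr=1e-4)

for step in range(10_000):
    z = torch.randn(1, 8)                  # one sampled latent "design"
    x_iface = torch.rand(256, 3)           # stand-in interface samples
    x_outside = torch.rand(256, 3) + 1.0   # stand-in exterior samples
    loss = (interface_loss(field(x_iface, z))
            + envelope_loss(field(x_outside, z)))  # + diversity, connectedness, ...
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper, the constraint set additionally includes smoothness objectives plus the diversity and connectedness terms discussed below.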

Key Contributions

  1. Introduction of GINNs: The paper presents an overarching framework for GINNs, which are designed to produce solutions constrained by geometric properties without relying on empirical data. This framework is particularly suited for applications where data is scarce but geometric constraints and objectives are well-defined.
  2. Neural Fields as Representation: The authors advocate the use of neural fields, specifically implicit neural shapes (INSs), as the primary representation of geometric objects. Neural fields are compact, continuous, and differentiable, offering advantages over traditional discrete representations.
  3. Handling Under-Determined Systems: Recognizing that many geometric problems are under-determined and admit multiple solutions, the paper argues for generating diverse sets of solutions. This is facilitated by incorporating a diversity loss into the training process (a simple surrogate is sketched after this list).
  4. Connectedness Constraint: A significant portion of the work formulates a connectedness constraint, which ensures that generated shapes form a single connected component. Morse theory is used to translate this topological property into a differentiable loss (a rough, non-differentiable diagnostic is sketched after this list).
  5. Experimental Validation: The efficacy of GINNs is validated through experiments on classic problems such as Plateau's problem, as well as realistic 3D engineering problems like the design of jet engine brackets. These experiments highlight the method's ability to generate diverse, high-quality solutions under complex geometric constraints.
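
The diversity constraint in item 3 can be illustrated with a simple pairwise-distance surrogate. The sketch below is a simplification rather than the paper's exact diversity measure: it compares the field values of several latent codes at a shared set of sample points and pushes up the minimum pairwise distance, so the two most similar designs are separated first.

```python
import torch

def diversity_loss(field, zs, x, eps=1e-8):
    """Encourage distinct shapes across latent codes (illustrative surrogate).

    zs: (k, z_dim) latent codes; x: (n, x_dim) shared sample points.
    Uses the L2 distance between field evaluations as a crude shape
    distance and penalizes the *minimum* pairwise distance.
    """
    f = torch.stack([field(x, z.unsqueeze(0)) for z in zs])  # (k, n)
    d = torch.cdist(f, f)                                    # (k, k) pairwise L2
    k = d.shape[0]
    off_diag = d[~torch.eye(k, dtype=torch.bool)]            # drop self-distances
    return -torch.log(off_diag.min() + eps)                  # maximize min distance
```

Penalizing the minimum rather than the mean is a common choice here, since an average can be raised by moving a single outlier design while the remaining designs collapse onto one another.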
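
The connectedness loss in item 4 relies on Morse theory to remain differentiable, which is beyond a short sketch. As a stand-in, the following non-differentiable diagnostic simply counts connected components of interior sample points via a radius graph and union-find; it is useful for monitoring whether generated shapes are connected, not for training.

```python
import numpy as np

def count_components(points, radius):
    """Connectivity diagnostic (NOT the paper's differentiable Morse-theory loss).

    points: (n, d) array of samples from the shape's interior (e.g. f(x) < 0).
    Links two samples if they are closer than `radius` and returns the
    number of connected components of the resulting graph via union-find.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    for i, j in zip(*np.where(dists < radius)):
        if i < j:
            parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})
```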

Numerical Results and Claims

Several numerical evaluations demonstrate the power and versatility of GINNs:

  • Plateau's Problem: GINNs produced minimal surfaces spanning a prescribed boundary, showing that the framework handles classical, well-posed geometric problems (the implicit-surface formulation is given after this list).
  • Parabolic Mirror Design: For an optical design problem, GINNs were able to identify a surface that directs reflected rays to a focal point, approximating the ideal parabolic shape.
  • Obstacle Course Problem: A GINN trained to connect two interfaces around an obstacle demonstrated the necessity and effectiveness of the connectedness loss. The generative GINN produced multiple viable connection paths, emphasizing the importance of diversity in such tasks.
  • Jet Engine Bracket: In a 3D scenario inspired by a real-world engineering challenge, GINNs generated diverse, viable designs for a jet engine bracket, adhering to a complex set of geometric constraints while maintaining structural integrity.
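
For the Plateau's problem entry above, the relevant geometric quantity can be written directly in terms of the implicit field. Up to sign and normalization conventions, the mean curvature of a level set of a smooth field f has a standard closed form, and a minimal surface is characterized by its vanishing:

```latex
% Mean curvature of the level set {x : f(x) = 0} of a smooth field f:
H = \nabla \cdot \frac{\nabla f}{\lVert \nabla f \rVert},
\qquad \text{Plateau's problem:}\quad H = 0 \ \text{on}\ \{f = 0\},
\ \text{with the prescribed boundary curve fixed.}
```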

Theoretical and Practical Implications

The theoretical implications of this research extend into several domains:

  • Generative Design: GINNs present a significant advancement in the field of generative design, particularly in areas where traditional methods are infeasible due to data scarcity.
  • Optimization in High Dimensions: The approach demonstrates potential improvements in optimizing high-dimensional spaces where constraints can be efficiently embedded into the learning process.
  • Constraint-Driven Neural Networks: The integration of geometric constraints into neural network training opens avenues for other types of constraints, such as physical laws and topological properties.

On a practical note, the implementation of GINNs in design processes can streamline the development of innovative solutions, providing engineers and designers with a powerful tool to explore vast solution spaces efficiently.

Future Directions

The results invite several future research directions:

  • Extension to Other Constraints: Expanding the framework to incorporate a broader range of constraints, including those from related domains like physics and topology, could enhance the versatility of GINNs.
  • Improving Computational Efficiency: Optimizing the computation of complex losses, particularly the connectedness loss, through more efficient algorithms or approximations, could significantly reduce training times.
  • Advanced Conditioning Mechanisms: Investigating alternative methods for conditioning neural fields on latent codes might improve the structure and quality of generated solutions (a minimal modulation sketch follows this list).
  • Scalability and Robustness: Scaling up the experiments to higher-dimensional and more complex scenarios while ensuring solution robustness will be crucial for real-world applicability.
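
On the conditioning point above, one alternative to concatenating the latent code with the input coordinates is feature-wise modulation of hidden layers. The layer below is a hypothetical FiLM-style sketch (all names and dimensions are assumptions), in the spirit of modulated periodic activations for neural fields:

```python
import torch
import torch.nn as nn

class ModulatedLayer(nn.Module):
    """FiLM-style conditioning: the latent code scales and shifts hidden
    features instead of being concatenated to the coordinates at the input."""
    def __init__(self, in_dim, out_dim, z_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.scale = nn.Linear(z_dim, out_dim)
        self.shift = nn.Linear(z_dim, out_dim)

    def forward(self, h, z):
        # Periodic activation as in SIREN-style fields; z modulates per-feature.
        return torch.sin(self.lin(h) * (1 + self.scale(z)) + self.shift(z))

# Usage: ModulatedLayer(3, 128, 8)(x, z) with x: (n, 3), z: (1, 8).
```

Which conditioning structure yields better-organized latent spaces for GINNs is left open by the authors.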

In summary, this paper establishes the foundation for geometry-informed neural networks, pushing the boundaries of generative modeling driven by constraints rather than data. This represents a promising direction for future research in machine learning and its applications in engineering and design.
