
Geometric Deep Learning for Computer-Aided Design: A Survey (2402.17695v1)

Published 27 Feb 2024 in cs.CG and cs.LG

Abstract: Geometric Deep Learning techniques have become a transformative force in Computer-Aided Design (CAD), with the potential to change how designers and engineers approach and enhance the design process. By harnessing machine learning-based methods, CAD designers can streamline their workflows, save time and effort, make better-informed decisions, and create designs that are both innovative and practical. The ability to process CAD designs represented as geometric data and to analyze their encoded features enables the identification of similarities among diverse CAD models, the proposal of alternative designs and enhancements, and even the generation of novel design alternatives. This survey offers a comprehensive overview of learning-based methods in computer-aided design across several categories, including similarity analysis and retrieval, 2D and 3D CAD model synthesis, and CAD generation from point clouds. It also provides a complete list of benchmark datasets and their characteristics, along with the open-source code that has propelled research in this domain. A final discussion examines the challenges prevalent in this rapidly evolving field and outlines potential directions for future research.
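The abstract's core idea — encoding CAD geometry into feature vectors that support similarity analysis and retrieval — can be sketched minimally. The snippet below is an illustrative stand-in, not any specific method from the survey: it uses random (untrained) weights and toy point clouds, and shows a PointNet-style encoder in which a shared per-point MLP is followed by max pooling, yielding a permutation-invariant embedding that can be compared via cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random weights stand in for a trained encoder (illustration only).
W1 = rng.normal(size=(3, 32))
W2 = rng.normal(size=(32, 16))

def embed(points):
    """Permutation-invariant embedding: shared per-point MLP + max pooling."""
    h = np.maximum(points @ W1, 0.0)  # per-point features, ReLU
    h = np.maximum(h @ W2, 0.0)
    return h.max(axis=0)              # symmetric (max) pooling over points

def cosine(a, b):
    """Cosine similarity between two embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy "CAD models" as sampled point clouds: a solid-like cube sample,
# a shuffled copy of it, and a flat plate as a dissimilar shape.
cube = rng.uniform(-1.0, 1.0, size=(256, 3))
cube_shuffled = cube[rng.permutation(256)]
plate = np.c_[rng.uniform(-1.0, 1.0, (256, 2)), np.zeros(256)]

e_cube = embed(cube)
# Max pooling ignores point order, so reshuffling leaves the embedding unchanged.
assert np.allclose(e_cube, embed(cube_shuffled))
print("cube vs plate similarity:", cosine(e_cube, embed(plate)))
```

The symmetric max-pooling step is what makes the embedding independent of point ordering, which is why PointNet-style encoders suit unordered geometric data; in a real retrieval system the random weights would be replaced by a network trained on a CAD benchmark.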

Authors (2)
  1. Negar Heidari (8 papers)
  2. Alexandros Iosifidis (153 papers)
Citations (3)
