ContrastCAD: Contrastive Learning-based Representation Learning for Computer-Aided Design Models (2404.01645v1)
Abstract: The success of Transformer-based models has encouraged many researchers to learn CAD models using sequence-based approaches. However, learning CAD models remains challenging, because they can represent complex shapes with long construction sequences. Furthermore, the same CAD model can be expressed using different CAD construction sequences. We propose a novel contrastive learning-based approach, named ContrastCAD, that effectively captures semantic information within the construction sequences of a CAD model. ContrastCAD generates augmented views using dropout techniques without altering the shape of the CAD model. We also propose a new CAD data augmentation method, called Random Replace and Extrude (RRE), to enhance the learning performance of the model when training on an imbalanced CAD dataset. Experimental results show that the proposed RRE augmentation method significantly enhances the learning performance of Transformer-based autoencoders, even for complex CAD models with very long construction sequences. The proposed ContrastCAD model is shown to be robust to permutation changes of construction sequences and performs better representation learning by generating representation spaces in which similar CAD models are more closely clustered. Our code is available at https://github.com/cm8908/ContrastCAD.
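The dropout-based view generation described above follows the general recipe popularized by SimCSE: encode the same input twice, let independent dropout masks produce two slightly different embeddings, and train with an InfoNCE-style contrastive loss where the two views of the same sample are positives and all other pairs in the batch are negatives. The sketch below is a minimal NumPy illustration of that recipe, not the authors' implementation; the toy `encode` function, its fixed weights, and all shapes are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, drop_p=0.1):
    # Toy stand-in for the sequence encoder: a fixed linear map followed by
    # a random dropout mask. Two passes over the same input therefore yield
    # two different "views" of it, as in SimCSE-style augmentation.
    W = np.arange(x.size * 8, dtype=float).reshape(x.size, 8) % 7 - 3
    h = x @ W
    mask = rng.random(h.shape) >= drop_p
    return h * mask / (1.0 - drop_p)   # inverted-dropout scaling

def info_nce(z1, z2, tau=0.1):
    # Cosine-similarity InfoNCE: matching rows (the two dropout views of the
    # same sample) are positives; all other rows in the batch are negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                                  # (N, N) logits
    logits = sim - sim.max(axis=1, keepdims=True)          # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                     # positives on diagonal

batch = rng.random((4, 16))                 # 4 toy "construction sequences"
z1 = np.stack([encode(x) for x in batch])   # first dropout pass
z2 = np.stack([encode(x) for x in batch])   # second dropout pass = augmented view
loss = info_nce(z1, z2)
print(float(loss))
```

Because both views share the same underlying features and differ only by the dropout mask, the diagonal similarities start high and the loss pulls the two views of each sample together while pushing apart embeddings of different samples, which is what produces the tighter clustering of similar models in the learned representation space.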