CADMorph: Geometry in Imaging & CAD Editing
- CADMorph is a dual-framework system that applies geometric reasoning for precise lung nodule analysis and semantically valid CAD model editing.
- In medical imaging, CADMorph extracts curvature-based 3D-morphomics from CT-derived meshes and classifies lung nodules with an XGBoost model, achieving high AUC scores.
- In CAD design, CADMorph orchestrates a plan–generate–verify loop with latent diffusion and large language models to iteratively refine parametric models while preserving construction semantics.
CADMorph refers to two distinct frameworks at the intersection of computational geometry, computer-aided design (CAD), and medical image analysis, each leveraging geometric priors for downstream tasks in its domain. The first, from medical imaging, operationalizes the extraction and quantification of morphological phenotypes for robust malignancy prediction of lung nodules. The second, in CAD editing, introduces a geometry-driven, plan–generate–verify methodology that orchestrates pretrained diffusion models and large language models (LLMs) for parametric model editing while preserving construction semantics. Both approaches demonstrate the utility of geometric reasoning, whether via mesh morphometrics or deep latent embeddings, for tasks constrained by data scarcity and the need to preserve semantic structure.
1. Definition and Scope
CADMorph in medical imaging is a pipeline for extracting and utilizing 3D-morphomics—quantitative shape features such as local curvature distributions and global mesh energy—from CT-derived meshes for pathology prediction, exemplified by the diagnosis of lung nodule malignancy (Munoz et al., 2022). In the CAD design domain, CADMorph denotes a system for geometry-driven parametric CAD editing: inferring minimal, semantically valid edits to a parametric construction sequence so that its rendered shape conforms tightly to a given geometric target, under the constraint of scarce data triplets tying parameter sequences to geometric shapes (Ma et al., 12 Dec 2025).
2. Medical Imaging CADMorph: 3D-Morphomics Workflow
CADMorph for medical morphometrics systematically captures geometric surface features that encode pathological deformations. The workflow consists of:
- Preprocessing: CT volumes and binary masks are resampled to isotropic voxels. Cubic patches centered on nodules (e.g., 64³ voxels) are extracted.
- Mesh Extraction and Cleanup: The Lewiner variant of Marching Cubes generates topologically correct meshes. Short edges are collapsed, long edges split, and degenerate faces are removed; optional smoothing is assessed but not utilized due to negligible impact on performance.
- Feature Computation: Mean and Gaussian curvatures at mesh vertices are computed and aggregated into histograms (e.g., ten bins for mean curvature), normalized by surface area. The absolute Gaussian curvature, integrated over the surface, forms the mesh energy metric. This yields an 11-dimensional 3D-morphomics descriptor per instance.
- Modeling: An XGBoost classifier with Bayesian hyperparameter tuning and imbalance correction is trained on 3D-morphomics features, with validation on public datasets such as NLST and LIDC (Munoz et al., 2022).
This approach directly quantifies protrusions such as spiculations and invaginations, yielding features that are robust to intensity artefacts and complementary to classical intensity- and texture-based radiomics.
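A minimal sketch of the feature-extraction and classification workflow above, assuming scikit-image, trimesh, and xgboost are available; the curvature-measure radius, histogram range, and classifier hyperparameters are illustrative choices rather than the settings reported in (Munoz et al., 2022):

```python
import numpy as np
import trimesh
from skimage.measure import marching_cubes
from xgboost import XGBClassifier

def morphomics_descriptor(mask, spacing=(1.0, 1.0, 1.0), n_bins=10, radius=2.0):
    """11-D descriptor: 10-bin mean-curvature histogram + mesh energy (illustrative)."""
    # Surface extraction with the Lewiner marching-cubes variant.
    verts, faces, _, _ = marching_cubes(mask.astype(float), level=0.5,
                                        spacing=spacing, method='lewiner')
    mesh = trimesh.Trimesh(vertices=verts, faces=faces, process=True)  # basic cleanup

    # Discrete curvature measures at every vertex (ball radius is a free parameter).
    mean_c = trimesh.curvature.discrete_mean_curvature_measure(mesh, mesh.vertices, radius)
    gauss_c = trimesh.curvature.discrete_gaussian_curvature_measure(mesh, mesh.vertices, radius)

    # Mean-curvature histogram, normalized by total surface area.
    hist, _ = np.histogram(mean_c, bins=n_bins, range=(-1.0, 1.0))
    hist = hist / mesh.area

    # Mesh energy: discrete approximation of the integral of |Gaussian curvature|.
    energy = np.abs(gauss_c).sum()

    return np.concatenate([hist, [energy]])

# Illustrative downstream classifier with class-imbalance correction.
# X: stacked descriptors, y: malignancy labels (assumed to be precomputed elsewhere).
clf = XGBClassifier(n_estimators=300, scale_pos_weight=5.0)
# clf.fit(X, y)
```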
3. CAD Domain CADMorph: Plan–Generate–Verify Editing Loop
CADMorph for CAD model editing addresses the challenge of synchronizing geometric and parametric representations during shape modification:
- Problem Formulation: Each CAD model is specified as a tuple (C, S), where C is a sequence of parametric primitives, and S is a shape representation, typically as a truncated signed distance field (SDF) on a voxel grid. The goal is to find an updated sequence C′ such that its rendering F(C′) best matches a provided target shape S′, subject to a structure-preserving regularization R_struct(C′, C).
- Framework: The system iteratively applies:
- Planning: Cross-attention maps from a parameter-to-shape (P2S) latent diffusion model identify sequence segments likely responsible for shape discrepancy; these are masked for editing.
- Generation: A masked-parameter-prediction (MPP) LLM infills masked regions to generate multiple candidate sequences.
- Verification: Each candidate is embedded in the P2S shape-latent space and compared against the target; the lowest-distance candidate is retained.
- Key Models: P2S is a 3D U-Net-based latent diffusion model trained on voxelized SDFs, while the MPP model is a LoRA-finetuned Llama-3-8B transformer with hierarchical masking and autoregressive prediction.
- Optimization Objective: the edited sequence is obtained as C′* = argmin_{C′} D(F(C′), S′) + λ · R_struct(C′, C), where D measures shape discrepancy (e.g., Chamfer distance), R_struct(C′, C) penalizes divergence from the original sequence, and λ weights structure preservation (Ma et al., 12 Dec 2025).
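A minimal sketch of the plan–generate–verify loop, assuming NumPy only; `p2s_attention`, `mpp_infill`, and `p2s_embed` are hypothetical stand-ins for the paper's P2S diffusion model, MPP language model, and rendering pipeline, the masking threshold is illustrative, and `chamfer_distance` shows one possible instantiation of the discrepancy term D:

```python
import numpy as np

def chamfer_distance(pts_a, pts_b):
    """Symmetric Chamfer distance between point sets of shape (N, 3) and (M, 3);
    one possible instantiation of the shape-discrepancy term D in the objective."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def plan_generate_verify(seq, target_latent, p2s_attention, mpp_infill, p2s_embed,
                         n_candidates=8, n_iters=3):
    """One editing run: plan (mask), generate (infill), verify (latent comparison).

    seq            -- current parametric construction sequence (list of tokens/primitives)
    target_latent  -- P2S shape-latent embedding of the target shape S'
    p2s_attention  -- callable(seq, target_latent) -> per-token relevance scores
    mpp_infill     -- callable(masked_seq, k) -> k candidate completed sequences
    p2s_embed      -- callable(seq) -> shape latent of the rendered sequence
    All callables are hypothetical stand-ins for the components described in the paper.
    """
    for _ in range(n_iters):
        # Planning: cross-attention scores flag the segments most responsible for the
        # shape discrepancy; here the top quartile is masked (threshold illustrative).
        scores = np.asarray(p2s_attention(seq, target_latent))
        mask = scores >= np.quantile(scores, 0.75)
        masked_seq = [None if m else tok for tok, m in zip(seq, mask)]

        # Generation: the MPP model proposes several infillings of the masked slots.
        candidates = mpp_infill(masked_seq, n_candidates)

        # Verification: keep the candidate whose shape latent is closest to the target.
        dists = [np.linalg.norm(p2s_embed(c) - target_latent) for c in candidates]
        seq = candidates[int(np.argmin(dists))]
    return seq
```

The sketch does not explicitly evaluate R_struct; keeping unmasked tokens fixed is one plausible way such structure preservation can arise, but the exact mechanism follows (Ma et al., 12 Dec 2025).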
4. Performance and Empirical Validation
Medical Imaging (Lung Nodule Classification)
Summarizing key experimental metrics from (Munoz et al., 2022):
| Model | Dataset | AUC | Sensitivity (%) | Specificity (%) | Accuracy |
|---|---|---|---|---|---|
| 3D-Morphomics | NLST | 0.964 | 90.7 | 91.1 | 0.94 |
| Radiomics (111 features) | NLST | 0.976 | 92.7 | 93.6 | 0.94 |
| Clinical features | NLST | 0.580 | 64.2 | 52.0 | 0.61 |
| Brock logistic model | NLST | 0.826 | 72.0 | 82.0 | 0.81 |
| 3D-Morphomics + Radiomics | NLST | 0.978 | 92.7 | 94.7 | 0.95 |
| 3D-Morphomics | LIDC | 0.906 | 84.0 | 85.8 | 0.84 |
| 3D-Morphomics + Radiomics | LIDC | 0.958 | 91.7 | 87.1 | 0.90 |
Combining 3D-morphomics with radiomics provides state-of-the-art AUC, with curvature-based features among the top contributors.
CAD Editing
Benchmarks from (Ma et al., 12 Dec 2025):
| Method | IoU ↑ | Mean CD ↓ | Edit Distance ↓ |
|---|---|---|---|
| GPT-4o | 0.247 | 0.107 | 21.12 |
| CAD-Diffuser | 0.548 | 0.097 | 17.29 |
| FlexCAD | 0.447 | 0.029 | 22.29 |
| CADMorph | 0.687 | 0.009 | 16.87 |
Ablations confirm that removing either the planning stage (mask selection via cross-attention) or the verification stage (latent-space shape comparison) significantly impairs performance, with IoU dropping to 0.45 or below.
5. Integration, Generalization, and Applications
In medical CADMorph, fusion consists of concatenating the 11-dimensional morphomics vector with 111 radiomics features for XGBoost-based joint modeling. Six of the top 30 features in the fusion model are curvature distribution bins, supporting their complementary diagnostic value. The framework generalizes to any binary mask representing an anatomical structure and is applied beyond lung nodules, such as in liver fibrosis staging via surface nodularity.
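A minimal sketch of this fusion step, assuming the morphomics and radiomics feature matrices are already computed as NumPy arrays; the function name and hyperparameters are illustrative:

```python
import numpy as np
from xgboost import XGBClassifier

def fit_fusion_model(morphomics, radiomics, labels):
    """morphomics: (n_samples, 11), radiomics: (n_samples, 111), labels: binary 0/1."""
    X = np.concatenate([morphomics, radiomics], axis=1)   # 122-D joint feature vector
    # scale_pos_weight corrects for class imbalance between benign and malignant nodules.
    n_neg, n_pos = np.bincount(labels)
    clf = XGBClassifier(n_estimators=300, scale_pos_weight=n_neg / max(n_pos, 1))
    clf.fit(X, labels)
    return clf
```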
In CAD design, CADMorph supports iterative editing—using the updated sequence as input for further targeted shape adjustment—as well as reverse-engineering enhancement, where its loop refines construction sequences output by geometry-only methods to achieve higher structural similarity to plausible parametric histories. Noted failure modes include highly complex shape alterations that cannot be attributed to any primitive in the original sequence, leading to planning-stage bottlenecks, and occasional invalid LLM infills.
6. Limitations and Computational Considerations
In medical settings, CADMorph achieves efficient inference (≈1.2 s per nodule on CPU) while providing high interpretability and robustness. The pipeline avoids dependence on intensity statistics and demonstrates strong out-of-distribution generalization.
In CAD design applications, computational latency is higher (∼7 minutes per model on 8×A100 GPUs) due to the iterative inference and large model sizes. Performance on public benchmarks is predominantly demonstrated on relatively simple models, indicating a need for more challenging datasets to further validate generality. Additional directions include accelerating test-time scaling and leveraging generated triplets for end-to-end modeling with reduced inference steps.
7. Significance and Prospects
Both CADMorph frameworks establish geometry-centric feature extraction and edit planning as effective strategies for robust, interpretable, and high-fidelity downstream tasks in domains requiring strong structure preservation and resilience to heterogeneity in appearance. In medical imaging, this yields improved malignancy prediction and diagnostic reliability. In generative CAD, it provides a data-efficient route to semantically valid geometry editing, scalable to iterative and reverse-engineered workflows. Subsequent developments may merge these principles with broader end-to-end learning, further elevating the geometric prior in representation and model-driven inference (Munoz et al., 2022, Ma et al., 12 Dec 2025).