FlatCAD: Curvature-Regularized Neural SDFs
- FlatCAD is a curvature-regularized neural SDF learning framework that biases surfaces toward developability by penalizing the mixed Weingarten term.
- It employs finite-difference and autodiff proxies to approximate curvature without full Hessian evaluation, significantly reducing memory and runtime.
- Empirical evaluations on CAD benchmarks show superior Normal Consistency, lower Chamfer Distance, and higher F1 scores compared to traditional methods.
FlatCAD is a curvature-regularized approach to neural signed-distance field (SDF) learning tailored for computer-aided design (CAD) geometry. It introduces an efficient curvature proxy that targets only the off-diagonal (mixed) Weingarten term, allowing scalable enforcement of developable, CAD-style surface behavior in neural SDFs. FlatCAD eliminates the need for full Hessian evaluation and second-order automatic differentiation, reducing both memory footprint and computational costs while maintaining or improving geometric fidelity on engineering-grade shape reconstruction tasks.
1. Mathematical Foundations of Curvature Regularization
FlatCAD operates on the implicit neural surface representation $\mathcal{S} = \{x \in \mathbb{R}^3 : f_\theta(x) = 0\}$, where $f_\theta : \mathbb{R}^3 \to \mathbb{R}$ is a multilayer perceptron (MLP) realizing the SDF. The first derivative provides surface normals, $n(x) = \nabla f_\theta(x) / \|\nabla f_\theta(x)\|$, satisfying the eikonal constraint $\|\nabla f_\theta\| \approx 1$ near $\mathcal{S}$. Curvature information is encoded in the Hessian $H = \nabla^2 f_\theta$, whose restriction to the tangent plane forms the shape (Weingarten) operator $S = T^\top H\, T / \|\nabla f_\theta\|$, with $T = [t_1\; t_2]$ an orthonormal basis for the tangent space at $x$. Principal curvatures $\kappa_1, \kappa_2$ are the eigenvalues of $S$, and Gaussian curvature is $K = \kappa_1 \kappa_2 = \det S$.
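The relations above can be checked on a closed-form SDF. The following numpy sketch (illustrative, not the paper's code) builds the shape operator for the sphere $f(x) = \|x\| - r$ and recovers its principal and Gaussian curvatures:

```python
import numpy as np

# Illustrative check: for the sphere SDF f(x) = ||x|| - r, the shape operator
# S = T^T H T / ||grad f|| should have both eigenvalues 1/r and det(S) = 1/r^2.

r = 2.0
x = np.array([0.0, 0.0, r])            # a point on the zero level set

g = x / np.linalg.norm(x)              # grad f = x / ||x|| (already unit length)
n = g / np.linalg.norm(g)

# Hessian of f(x) = ||x|| - r is (I - n n^T) / ||x||
H = (np.eye(3) - np.outer(n, n)) / np.linalg.norm(x)

# Orthonormal tangent basis T = [t1 t2] at x
t1 = np.array([1.0, 0.0, 0.0])
t2 = np.array([0.0, 1.0, 0.0])
T = np.stack([t1, t2], axis=1)         # 3x2 matrix

S = T.T @ H @ T / np.linalg.norm(g)    # 2x2 shape (Weingarten) operator
kappa = np.linalg.eigvalsh(S)          # principal curvatures
K = np.linalg.det(S)                   # Gaussian curvature

print(kappa)   # both ~ 1/r = 0.5
print(K)       # ~ 1/r^2 = 0.25
```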
FlatCAD's core innovation is to penalize only the off-diagonal (mixed) term, $S_{12} = t_1^\top H\, t_2 / \|\nabla f_\theta\|$. Under rotation of the tangent frame by an angle $\phi$ away from the principal directions, $S_{12}$ becomes $S_{12}(\phi) = \frac{\kappa_2 - \kappa_1}{2} \sin 2\phi$, and its squared expectation over random $\phi$ yields $\mathbb{E}_\phi\big[S_{12}(\phi)^2\big] = \frac{(\kappa_1 - \kappa_2)^2}{8}$.
Regularizing $|S_{12}|$ biases the surface locally toward developability ($K = 0$), suppressing warp while allowing uniform bending or flattening, as desired in CAD reconstructions.
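The frame-averaging identity above is pure linear algebra and can be verified numerically. A minimal sketch (not from the paper) averages the squared off-diagonal entry of a rotated shape operator:

```python
import numpy as np

# Numerical check of E_phi[S12(phi)^2] = (kappa1 - kappa2)^2 / 8 for a shape
# operator written in its principal frame and rotated by a random angle phi.

kappa1, kappa2 = 0.5, -0.3

# S12 in a frame rotated by phi: (kappa2 - kappa1)/2 * sin(2 phi)
phis = np.linspace(0.0, np.pi, 100000, endpoint=False)  # one full period of sin(2phi)
s12 = (kappa2 - kappa1) / 2.0 * np.sin(2.0 * phis)
mean_sq = np.mean(s12 ** 2)

print(mean_sq)   # ~ (kappa1 - kappa2)^2 / 8 = 0.08
```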
The full FlatCAD training loss $\mathcal{L} = \lambda_{\text{on}}\mathcal{L}_{\text{on}} + \lambda_{\text{off}}\mathcal{L}_{\text{off}} + \lambda_{\text{eik}}\mathcal{L}_{\text{eik}} + \lambda_{\text{curv}}\mathcal{L}_{\text{curv}}$ comprises an on-surface Dirichlet term $\mathcal{L}_{\text{on}} = \frac{1}{|P|}\sum_{x\in P}|f_\theta(x)|$, an off-surface term $\mathcal{L}_{\text{off}} = \frac{1}{|Q|}\sum_{x\in Q}\exp(-\alpha\,|f_\theta(x)|)$ (with sharpness constant $\alpha$), an eikonal term $\mathcal{L}_{\text{eik}} = \frac{1}{|E|}\sum_{x\in E}\big(\|\nabla f_\theta(x)\| - 1\big)^2$, and the curvature proxy applied on a shell $N$ of near-surface points via $\mathcal{L}_{\text{curv}} = \frac{1}{|N|}\sum_{x\in N}|S_{12}(x)|$.
Recommended values for the weights $\lambda_{\text{on}}$, $\lambda_{\text{off}}$, $\lambda_{\text{eik}}$, and $\lambda_{\text{curv}}$ are reported in the original paper.
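The assembly of the four terms can be sketched in a few lines. The function below is illustrative only: the weights and the sharpness constant `alpha` are placeholder values, not the paper's recommended settings, and the inputs are assumed to be precomputed per-batch arrays:

```python
import numpy as np

def flatcad_loss(f_on, f_off, grad_norms, s12,
                 lam_on=1.0, lam_off=0.1, lam_eik=0.1, lam_curv=0.01,
                 alpha=100.0):
    """Combine Dirichlet, off-surface, eikonal, and curvature-proxy terms.

    All weights and alpha are illustrative placeholders.
    """
    l_on   = np.mean(np.abs(f_on))                    # |f| at on-surface samples
    l_off  = np.mean(np.exp(-alpha * np.abs(f_off)))  # push |f| away from 0 off-surface
    l_eik  = np.mean((grad_norms - 1.0) ** 2)         # eikonal residual
    l_curv = np.mean(np.abs(s12))                     # off-diagonal Weingarten proxy
    return lam_on * l_on + lam_off * l_off + lam_eik * l_eik + lam_curv * l_curv

# Toy usage with dummy per-batch values
loss = flatcad_loss(f_on=np.array([0.0, 0.01]),
                    f_off=np.array([0.5, 0.7]),
                    grad_norms=np.array([1.0, 1.02]),
                    s12=np.array([0.1, -0.2]))
print(loss)
```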
2. Implementation Strategies for the Curvature Proxy
FlatCAD offers two operationally distinct but mathematically equivalent instantiations of the off-diagonal curvature penalty:
2.1. Finite-Difference Proxy
This approach approximates the mixed second-order derivative without explicit Hessian computation. For a shell point $x$ with tangent frame $(t_1, t_2)$ and step size $h > 0$, four stencil samples $f_{\pm\pm} = f_\theta(x \pm h\,t_1 \pm h\,t_2)$ are evaluated.
The mixed-difference stencil yields $t_1^\top H\, t_2 \approx \frac{f_{++} - f_{+-} - f_{-+} + f_{--}}{4h^2}$. Thus, the curvature proxy at $x$ is
$$S_{12}(x) \approx \frac{f_{++} - f_{+-} - f_{-+} + f_{--}}{4h^2\,\|\nabla f_\theta(x)\|},$$
requiring four forward SDF evaluations per proxy point and one gradient, with $O(h^2)$ error vanishing as $h \to 0$.
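The stencil can be validated against an analytic SDF. The sketch below (illustrative, not the paper's code) applies it to a cylinder SDF in a tangent frame rotated 45° away from the principal directions, where the exact mixed term is $(\kappa_2 - \kappa_1)/2 = -1/(2r)$:

```python
import numpy as np

def sdf(p, r=2.0):
    """Analytic SDF of an infinite cylinder of radius r along the z-axis."""
    return np.hypot(p[0], p[1]) - r

def s12_fd(f, x, t1, t2, grad_norm, h=1e-3):
    """Four-point mixed-difference proxy for t1^T H t2 / ||grad f||."""
    fpp = f(x + h * t1 + h * t2)
    fpm = f(x + h * t1 - h * t2)
    fmp = f(x - h * t1 + h * t2)
    fmm = f(x - h * t1 - h * t2)
    return (fpp - fpm - fmp + fmm) / (4.0 * h * h * grad_norm)

r = 2.0
x = np.array([r, 0.0, 0.0])                      # point on the cylinder surface
# Tangent frame rotated 45 degrees away from the principal directions
t1 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
t2 = np.array([0.0, -1.0, 1.0]) / np.sqrt(2.0)

s12 = s12_fd(sdf, x, t1, t2, grad_norm=1.0)      # ||grad f|| = 1 for an exact SDF
print(s12)    # ~ -1/(2r) = -0.25
```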
2.2. Autodiff (Hessian-Vector Product) Proxy
This variant leverages Hessian-vector products and reverse-mode autodiff without forming the full Hessian:
- Compute $f = f_\theta(x)$ (forward pass).
- Get $g = \nabla_x f_\theta(x)$ (first reverse pass).
- Form the scalar $g_v = \langle g, t_2 \rangle$.
- Compute the Hessian-vector product $H t_2 = \nabla_x \langle g, t_2 \rangle$ (second reverse pass).
- Contract: $m = \langle t_1, H t_2 \rangle = t_1^\top H\, t_2$.
- Normalize: $S_{12} = m / \|g\|$.

The cost is two backward sweeps per point, with no explicit Hessian storage.
Pseudocode (with $u = t_1$, $v = t_2$ the tangent directions):

```
for each proxy point x:
    f0 = f(x)             # forward SDF evaluation
    g  = grad(f0, x)      # first reverse sweep: gradient of f
    gv = dot(g, v)        # scalar <grad f, v>
    hv = grad(gv, x)      # second reverse sweep: Hessian-vector product H v
    mixed = dot(u, hv)    # u^T H v
    S12 = mixed / norm(g)
    L_proxy += abs(S12)
L_proxy /= L              # average over the L proxy points
```
The finite-difference (Proxy-FD) and autodiff (Proxy-AD) variants exhibit near-identical accuracy.
3. Practical Training Loop and Computational Complexity
A single FlatCAD training iteration comprises:
- Sampling on-surface points for $\mathcal{L}_{\text{on}}$.
- Sampling free-space points for $\mathcal{L}_{\text{off}}$.
- Sampling domain points for $\mathcal{L}_{\text{eik}}$.
- Drawing shell points with tangent frames for $\mathcal{L}_{\text{curv}}$.
- Forward pass: $f_\theta$ at all points; $\nabla f_\theta$ at eikonal and proxy points.
- Loss computation and gradient update.
In traditional full-Hessian Gaussian curvature regularization (as in NeurCADRecon), each sample requires all six independent second derivatives of the 3D Hessian—necessitating six Hessian-vector products per point—which leads to large autograd graphs and high GPU memory consumption.
By contrast:
- Proxy-AD requires only one Hessian-vector product (two backward sweeps) per proxy point, with memory scaling comparable to standard first-order autodiff.
- Proxy-FD forgoes second-order graphs entirely—using only four forward SDF evaluations and a single gradient per proxy point.
Empirical complexity (NVIDIA H100, 1 MB ABC subset):
| Method | Iter Time (ms) | Conv Time (s) | GPU Mem (GB) |
|---|---|---|---|
| DiGS | 2.99 | 289.8 | 1.61 |
| NSH | 1.84 | 151.8 | 1.79 |
| NeurCADRecon | 5.60 | 455.2 | 6.06 |
| Proxy-AD | 2.54 | 191.8 | 3.46 |
| Proxy-FD | 3.13 | 172.7 | 3.69 |
Both FlatCAD variants roughly halve memory and wall-clock time compared to NeurCADRecon, with minimal impact on convergence behavior.
4. Empirical Evaluation on CAD Datasets
FlatCAD has been validated on the ABC benchmark in two regimes: a "1 MB set" (100 random CAD parts) and a "5 MB set" (100 hand-selected models). Core metrics are Normal Consistency (NC; higher is better), Chamfer Distance (CD; lower is better), and F1 score (higher is better).
On the 5 MB set:
| Method | NC ↑ | CD ↓ | F1 ↑ |
|---|---|---|---|
| NeurCADRecon | 96.83 | 5.94 | 81.04 |
| Proxy-AD | 97.14 | 5.27 | 86.56 |
| Proxy-FD | 97.38 | 4.93 | 85.86 |
Proxy-FD attains the best NC and CD, while Proxy-AD leads F1; both outperform NeurCADRecon in accuracy and efficiency. Performance persists under data sparsity: FlatCAD remains robust down to 5 K input points; only at 1 K do reconstructions degrade notably. The proxy's developability bias supports plausible hole-filling.
Ablation studies show that the curvature proxy weight $\lambda_{\text{curv}}$ induces smooth, CAD-like surfaces across the tested range, with an intermediate setting providing the best accuracy-to-smoothness balance. Proxy-AD and Proxy-FD differ in runtime by less than 5%.
5. Extensions: Scheduling the Weingarten Proxy Weight
FlatCAD's original formulation applied a constant off-diagonal Weingarten (ODW) penalty. Later work demonstrated that dynamically scheduling the ODW weight during training improves both stability and fidelity ["Scheduling the Off-Diagonal Weingarten Loss of Neural SDFs for CAD Models" (Yin et al., 5 Nov 2025)].
Five ODW weight schedules were formalized for the weight $\lambda_{\text{odw}}(t)$, with $t \in [0, 1]$ the normalized training progress:
- Constant: $\lambda_{\text{odw}}(t) \equiv \lambda_0$ (FlatCAD baseline).
- Linear decay: an initial plateau at 10, then linear decay to 0.001, then to 0 by the end of training.
- Quintic ("smooth") decay: slow easing from 10 to 0.001 up to $t = 0.5$, then to 0.
- Step interpolation: 10 in the first phase, 0.001 in the middle phase, then 0.
- Warm-up (increasing linear): starts at 0 and ramps linearly up to 10 by the end of training.
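The five schedules can be sketched as simple functions of $t$. The plateau value 10 and floor 0.001 come from the description above; the breakpoints `T1 = 0.2` and `T2 = 0.9` are illustrative assumptions, since the exact values are not given here:

```python
# Sketch of the five ODW weight schedules over normalized progress t in [0, 1].
# HI/LO are from the text; T1 (plateau end) and T2 (cutoff) are assumed values.

HI, LO = 10.0, 0.001
T1, T2 = 0.2, 0.9

def constant(t):
    return HI

def linear_decay(t):
    if t < T1: return HI                              # initial plateau
    if t < T2: return HI + (LO - HI) * (t - T1) / (T2 - T1)
    return 0.0                                        # off for the final phase

def quintic_decay(t):
    if t < T1: return HI
    if t < 0.5:                                       # slow easing down to 0.001
        s = (t - T1) / (0.5 - T1)
        ease = 1.0 - (6 * s**5 - 15 * s**4 + 10 * s**3)   # reversed quintic smoothstep
        return LO + (HI - LO) * ease
    return 0.0

def step(t):
    if t < T1: return HI
    if t < T2: return LO
    return 0.0

def warmup(t):
    return HI * t                                     # reverse schedule: ramp up

print(linear_decay(0.0), linear_decay(0.95), warmup(1.0))
```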
On the ABC benchmark (25 models):
| Schedule | NC ↑ | CD ↓ | F1 ↑ | Time (s) |
|---|---|---|---|---|
| FlatCAD (const.) | 96.14 | 4.37 | 84.98 | 877.5 |
| Linear decay | 97.95 | 3.05 | 90.59 | 882.7 |
| Quintic interp. | 98.01 | 2.86 | 92.72 | 878.2 |
| Step | 97.99 | 2.87 | 92.71 | 1003.5 |
Decay schedules yield 30–35% lower Chamfer Distance and improve F1 and NC. Quintic is optimal in stability and detail recovery, while warm-up (reverse schedule) degrades performance. Strong initial regularization stabilizes optimization, suppresses curvature noise, and facilitates eventual detail capture as the penalty decays.
6. Implications for CAD Reconstruction and Engineering Applications
FlatCAD, by decoupling the geometric bias (developability, minimal warp) from computational bottlenecks (full Hessian graphs), enables practical large-scale neural SDF learning for complex CAD surfaces. Its framework-agnostic, drop-in nature supports deployment in existing geometric learning stacks with minimal implementation effort. Robust behavior in data-sparse regimes and superior topology preservation recommend FlatCAD as a default regularizer for neural geometric reconstruction in engineering workflows.
The curvature proxy's parameter-free, purely geometric construction and tunable scheduling argue for its continued relevance as SDFs expand as a modeling primitive across CAD, reverse engineering, and shape optimization contexts. The separation of computational and geometric concerns in FlatCAD suggests the design space of higher-order regularization proxies for neural implicit methods is not yet exhausted.