3D Progressive Smoothing Schedule
- 3D Progressive Smoothing Schedules are protocols that adjust smoothing strength and scale during optimization to balance noise suppression with fine detail recovery.
- They are applied in neural SDF reconstruction and LiDAR bundle adjustment to enhance model robustness and improve geometric fidelity.
- Various scheduling strategies, including linear, quintic, and step decays, are used to optimize convergence and prevent over-smoothing in complex geometric tasks.
Three-dimensional progressive smoothing schedules are training protocols that modulate the strength and spatial scale of smoothing operators throughout the optimization of geometric models, typically in neural implicit representations or SLAM problems. These schedules underlie recent advances in both geometric regularization for neural signed distance functions (SDFs) and robust bundle adjustment for LiDAR-based state estimation. Two notable implementations are (i) time-varying Off-Diagonal Weingarten (ODW) curvature regularization in neural SDF learning for CAD model reconstruction (Yin et al., 5 Nov 2025), and (ii) progressive spatial smoothing via graduated kernel radii in bundle adjustment for large-scale LiDAR mapping (Li et al., 2024). Progressive schedules enable a strong initial smoothing that stabilizes optimization and suppresses noise, followed by a gradual relaxation or reduction that allows recovery of fine geometric structures.
1. Mathematical Formulation of Progressive Smoothing Operators
In neural SDF reconstruction for CAD surfaces, the Off-Diagonal Weingarten (ODW) loss is a second-order curvature constraint that penalizes the off-diagonal entry of the Hessian of the SDF network, formulated as $\mathcal{L}_{\mathrm{ODW}} = \mathbb{E}_{x}\big[\big(t_1^{\top} H_f(x)\, t_2\big)^2\big]$, where $(t_1, t_2)$ defines any orthonormal basis of the tangent plane at sample point $x$ and $H_f(x)$ is the Hessian of the SDF $f$. The loss measures the discrepancy between principal curvatures and serves to uniformly flatten and round surface patches (Yin et al., 5 Nov 2025).
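A minimal PyTorch sketch of such an off-diagonal curvature penalty, computed via automatic differentiation, is shown below; the network interface `sdf`, the tangent-basis construction, and all tensor names are illustrative assumptions rather than the published implementation.

```python
import torch
import torch.nn.functional as F

def odw_loss(sdf, x):
    """Illustrative off-diagonal curvature penalty (assumed form, not the published code).

    sdf : callable mapping (N, 3) points to (N,) signed distances.
    x   : (N, 3) sample points on or near the surface.
    """
    x = x.requires_grad_(True)
    f = sdf(x)                                                    # (N,)
    g = torch.autograd.grad(f.sum(), x, create_graph=True)[0]     # SDF gradient (unnormalized normal)
    n = F.normalize(g, dim=-1).detach()                           # unit normal, detached to fix the basis

    # Orthonormal tangent basis (t1, t2) at each sample point.
    up = torch.tensor([0.0, 0.0, 1.0], device=x.device).expand_as(n)
    alt = torch.tensor([1.0, 0.0, 0.0], device=x.device).expand_as(n)
    ref = torch.where(n[..., 2:3].abs() > 0.9, alt, up)           # avoid degeneracy near the pole
    t1 = F.normalize(torch.cross(n, ref, dim=-1), dim=-1)
    t2 = torch.cross(n, t1, dim=-1)

    # Hessian-vector product H_f(x) @ t2 via a second backward pass.
    Ht2 = torch.autograd.grad((g * t2).sum(), x, create_graph=True)[0]
    odw = (t1 * Ht2).sum(dim=-1)                                  # off-diagonal entry t1^T H_f t2
    return (odw ** 2).mean()
```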
For LiDAR bundle adjustment, progressive spatial smoothing (PSS) fits second-order polynomial surfaces to spatial neighborhoods, specifically in local tangent frames. A Gaussian kernel weights neighbor points, controlling the surface fit's influence radius $r$. This kernel is shrunk iteratively, creating a coarse-to-fine smoothing schedule (Li et al., 2024).
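A schematic NumPy version of such a Gaussian-weighted quadric fit in a local tangent frame is given below; the frame construction, variable names, and fitting details are assumptions for illustration and not the PSS implementation itself.

```python
import numpy as np

def weighted_quadric_fit(neighbors, center, normal, radius):
    """Fit z = a*u^2 + b*u*v + c*v^2 + d*u + e*v + f in the local tangent frame
    at `center`, with Gaussian weights of bandwidth `radius` (assumed form)."""
    # Local tangent frame (t1, t2, normal) at the center point.
    ref = np.array([0.0, 0.0, 1.0]) if abs(normal[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    t1 = np.cross(normal, ref); t1 /= np.linalg.norm(t1)
    t2 = np.cross(normal, t1)

    d = neighbors - center                          # (K, 3) offsets to neighbor points
    u, v = d @ t1, d @ t2                           # tangent-plane coordinates
    z = d @ normal                                  # height above the tangent plane

    # Gaussian kernel: nearby points dominate the fit, distant points fade out.
    w = np.exp(-(u**2 + v**2) / (2.0 * radius**2))

    # Weighted least squares for the six quadric coefficients.
    A = np.stack([u**2, u*v, v**2, u, v, np.ones_like(u)], axis=1)
    sw = np.sqrt(w)
    coeffs, *_ = np.linalg.lstsq(sw[:, None] * A, sw * z, rcond=None)
    return coeffs                                   # (a, b, c, d, e, f)
```

Shrinking `radius` between stages reproduces the coarse-to-fine behaviour of the schedule.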
2. Scheduling Strategies and Algorithms
Schedules parameterize the dynamic strength or spatial range of the smoothing operator. In ODW-regularized SDF training, the multiplicative weight $\lambda(\tau)$, with $\tau \in [0,1]$ indicating normalized training progress, is controlled by interpolation among four keypoints $(\tau_i, \lambda_i)$, $i = 0,\dots,3$. The main interpolation strategies (sketched in code after the list below) are:
- Constant: $\lambda(\tau) = \lambda_0$ throughout
- Linear decay: $\lambda$ decreases linearly between control points
- Quintic (fifth-order easing): $\lambda$ follows a fifth-order easing of the normalized segment time $u \in [0,1]$, e.g. $s(u) = 6u^5 - 15u^4 + 10u^3$
- Step: changes abruptly at keypoints
- Warm-up: inverse of decay, starting low and increasing (Yin et al., 5 Nov 2025)
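A compact sketch of such a keypoint-interpolated weight schedule is given below; the keypoint format, the quintic easing polynomial, and the function name are assumptions chosen to mirror the strategies listed above (a warm-up schedule is obtained simply by supplying increasing keypoint weights).

```python
def smoothing_weight(tau, keypoints, mode="quintic"):
    """Interpolate the smoothing weight at normalized progress tau in [0, 1].

    keypoints : list of (tau_i, lambda_i) pairs sorted by tau_i (assumed format).
    mode      : "constant", "linear", "quintic", or "step".
    """
    if mode == "constant":
        return keypoints[0][1]
    # Locate the segment [tau_i, tau_{i+1}] containing tau.
    for (t0, w0), (t1, w1) in zip(keypoints, keypoints[1:]):
        if t0 <= tau <= t1:
            if mode == "step":
                return w0                            # hold until the next keypoint
            u = (tau - t0) / max(t1 - t0, 1e-12)     # normalized segment time
            if mode == "quintic":                    # fifth-order easing (smootherstep)
                u = 6 * u**5 - 15 * u**4 + 10 * u**3
            return w0 + (w1 - w0) * u                # blend of the (eased) segment time
    return keypoints[-1][1]
```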
Pseudocode for loss integration in a standard training loop is provided, defining $\lambda(\tau)$ at each iteration and aggregating the weighted ODW term into the total loss alongside the other terms (Dirichlet, sign-agnostic, Eikonal).
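A hedged sketch of that integration, reusing the `smoothing_weight` and `odw_loss` helpers sketched above, is given below; the network, optimizer, sampler, keypoint values, and the collapsed `surface_losses` stand-in are all placeholders rather than the published pipeline.

```python
import torch

# Placeholder SDF network and optimizer, standing in for the real architecture.
sdf = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Softplus(), torch.nn.Linear(64, 1))
optimizer = torch.optim.Adam(sdf.parameters(), lr=1e-4)

def surface_losses(x):
    # Stand-in for the Dirichlet, sign-agnostic, and Eikonal terms of the real pipeline.
    return sdf(x).squeeze(-1).abs().mean()

keypoints = [(0.0, 1.0), (0.2, 1.0), (0.5, 0.1), (1.0, 0.0)]   # assumed keypoint values
num_iters, batch_size = 10_000, 2048

for it in range(num_iters):
    tau = it / num_iters                                       # normalized training progress
    lam = smoothing_weight(tau, keypoints, mode="quintic")     # scheduled ODW weight

    x = torch.rand(batch_size, 3) * 2 - 1                      # placeholder sampler in [-1, 1]^3
    loss = surface_losses(x) + lam * odw_loss(lambda p: sdf(p).squeeze(-1), x)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```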
For PSS-GOSO bundle adjustment, the smoothing kernel radius follows a geometric decay, $r_{k+1} = \gamma\, r_k$ with ratio $\gamma \in (0,1)$, from an initial radius $r_0$ over at most $K$ stages (five in the reported configuration). At each stage, scans are voxelized at the current scale, kernel points are sampled, and polynomial fits are performed. Factors are accumulated into Levenberg-Marquardt normal equations, after which the radius is reduced to the next stage value (Li et al., 2024).
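The stage loop can be summarized by the following Python sketch; the decay ratio, initial radius, stage count, and the `solve_stage` callback are illustrative assumptions, not the published settings.

```python
import numpy as np

def pss_stage_schedule(r0=4.0, gamma=0.5, num_stages=5):
    """Geometrically decaying kernel radii r_k = r0 * gamma**k (values illustrative)."""
    return [r0 * gamma**k for k in range(num_stages)]

def run_stages(scans, poses, solve_stage, r0=4.0, gamma=0.5, num_stages=5, tol=1e-6):
    """Outer coarse-to-fine loop: one Levenberg-Marquardt solve per smoothing scale.

    `solve_stage(scans, poses, radius)` is an assumed callback that voxelizes the scans
    at the current scale, samples kernel points, fits Gaussian-weighted quadrics, and
    accumulates the resulting factors into the LM normal equations, returning new poses.
    """
    for r in pss_stage_schedule(r0, gamma, num_stages):
        new_poses = solve_stage(scans, poses, r)
        if np.max(np.abs(new_poses - poses)) < tol:    # greedy halt on convergence
            return new_poses
        poses = new_poses
    return poses
```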
3. Selection of Schedule Parameters
Empirical studies guide schedule parameterization:
- Neural SDFs: the initial ODW weight is held until 20% of training, suppressing large-scale warping; it decays to a reduced value by 50% of training, allowing fine-scale surface detail; the final weight of zero enables unconstrained recovery of acute features (Yin et al., 5 Nov 2025).
- Bundle adjustment: a sufficiently large initial smoothing radius (in meters) ensures robustness to sensor noise and outliers. Five iterations progressively shrink the kernel to its final radius. Decaying too quickly overfits a poor initial alignment; decaying too slowly fails to reject fine-structure outliers sufficiently (Li et al., 2024).
Heuristics include penalized normal smoothing (with a tunable penalty parameter) to protect edges, voxel-grid sampling for computational tractability, and incremental adjustment of auxiliary penalty weights during optimization.
4. Empirical Evaluation and Observed Outcomes
On the ABC CAD dataset (Yin et al., 5 Nov 2025), time-varying ODW schedules outperform static baselines:
| Schedule | Chamfer Distance (×10³) | Improvement over baseline |
|---|---|---|
| FlatCAD (fixed) | 4.37 (±5.48) | — |
| Linear decay | 3.05 (±2.17) | ~30% |
| Quintic decay | 2.86 (±1.22) | ~35% |
| Step | 2.87 (±1.34) | ~34% |
| Warm-up (linear) | 3.24 (±2.37) | — (inferior) |
Qualitative analyses: constant weights yield over-smoothed results, suppressing critical transitions; variable decay schedules enable recovery of sharp features and prevent late-stage geometric warping. Step schedules reach competitive accuracy but may introduce transient artifacts.
For LiDAR bundle adjustment, PSS-GOSO demonstrates high robustness and endpoint precision across platforms and environments. The geometric kernel decay ensures broad initial stabilization and accurate detail recovery upon convergence. Too-slow or too-fast kernel decay degrades performance, as previously ablated in PSS-BA studies (Li et al., 2024).
5. Integration in Optimization Pipelines
In neural SDF frameworks, progressive smoothing schedules are integrated directly into the training loop by modulating loss weights. At each iteration, the schedule is queried at the current progress $\tau$ and the resulting weight $\lambda(\tau)$ is applied to the curvature loss.
In LiDAR bundle adjustment, the schedule steers voxel sampling, kernel formation, weight computation, and surface fitting within each outer Levenberg-Marquardt loop. The kernel radius update is greedy: the radius is reduced after each stage, and the process halts upon convergence or once the maximum number of stages is reached.
6. Practical Recommendations and Implications
For neural SDF CAD reconstruction, a strong-start decay schedule is recommended: high curvature smoothing early in training prevents highly warped local minima, while progressive reduction unlocks detailed geometry. Quintic interpolation gives smooth transitions and the best metric trade-offs; linear decay is nearly as effective and easier to implement. Step schedules are suitable only for ablation due to possible transient instability. Adjusting the decay interval and the final weight trades off surface sharpness against stability.
For LiDAR bundle adjustment, a progressive, five-level geometric reduction is recommended for balancing robust global pose correction against fine-scale point cloud fidelity. Kernel shape, sampling density, and the penalization parameters should be chosen according to the scene's structural complexity and noise regime.
A plausible implication is that progressive smoothing schedules, whether time-based or spatial-scale-based, furnish a principled protocol for coarse-to-fine optimization in geometric inverse problems, conferring both stability against outliers and maximal detail recovery at convergence (Yin et al., 5 Nov 2025, Li et al., 2024).