3D Progressive Smoothing Schedule

Updated 20 December 2025
  • 3D Progressive Smoothing Schedules are protocols that adjust smoothing strength and scale during optimization to balance noise suppression with fine detail recovery.
  • They are applied in neural SDF reconstruction and LiDAR bundle adjustment to enhance model robustness and improve geometric fidelity.
  • Various scheduling strategies, including linear, quintic, and step decays, are used to optimize convergence and prevent over-smoothing in complex geometric tasks.

Three-dimensional progressive smoothing schedules are training protocols that modulate the strength and spatial scale of smoothing operators throughout the optimization of geometric models, typically in neural implicit representations or SLAM problems. These schedules underlie recent advances in both geometric regularization for neural signed distance functions (SDFs) and robust bundle adjustment for LiDAR-based state estimation. Two notable implementations are (i) time-varying Off-Diagonal Weingarten (ODW) curvature regularization in neural SDF learning for CAD model reconstruction (Yin et al., 5 Nov 2025), and (ii) progressive spatial smoothing via graduated kernel radii in bundle adjustment for large-scale LiDAR mapping (Li et al., 2024). Progressive schedules enable a strong initial smoothing that stabilizes optimization and suppresses noise, followed by a gradual relaxation or reduction that allows recovery of fine geometric structures.

1. Mathematical Formulation of Progressive Smoothing Operators

In neural SDF reconstruction for CAD surfaces, the Off-Diagonal Weingarten (ODW) loss is a second-order curvature constraint that penalizes the off-diagonal entry of the Hessian of the SDF network, formulated as $S_{12}(p) = u^\top H_f(p)\, v / \|\nabla f(p)\|_2$, where $(u, v)$ is any orthonormal basis of the tangent plane at sample point $p$. The loss $L_{\mathrm{ODW}} = (1/L) \sum_{p \in \Omega} |S_{12}(p)|$ measures the discrepancy between principal curvatures and serves to uniformly flatten and round surface patches (Yin et al., 5 Nov 2025).
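A minimal PyTorch sketch of this loss is given below, assuming a network `f` that maps (N, 3) points to scalar SDF values; the per-point tangent-basis construction and the Hessian-vector product via autograd are one common realization, and all names are illustrative rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def odw_loss(f, p):
    """Mean |S_12(p)| = |u^T H_f(p) v| / ||grad f(p)||_2 over samples p (N, 3).

    `f` is an SDF network mapping points to scalar distances; the tangent
    basis (u, v) is built per point from the normalized gradient.
    """
    p = p.detach().requires_grad_(True)
    grad = torch.autograd.grad(f(p).sum(), p, create_graph=True)[0]   # (N, 3)
    n = F.normalize(grad, dim=-1)                                     # unit normals

    # Pick a helper axis not parallel to n, then build the tangent basis.
    ex = torch.tensor([1.0, 0.0, 0.0], device=p.device)
    ey = torch.tensor([0.0, 1.0, 0.0], device=p.device)
    a = torch.where(n[:, :1].abs() < 0.9, ex, ey)                     # (N, 3)
    u = F.normalize(torch.cross(n, a, dim=-1), dim=-1)
    v = torch.cross(n, u, dim=-1)

    # u^T H_f v via a Hessian-vector product: H_f v = d(grad . v)/dp,
    # with v detached so only f is differentiated a second time.
    Hv = torch.autograd.grad((grad * v.detach()).sum(), p, create_graph=True)[0]
    s12 = (u * Hv).sum(dim=-1) / grad.norm(dim=-1).clamp_min(1e-8)
    return s12.abs().mean()
```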

For LiDAR bundle adjustment, progressive spatial smoothing (PSS) fits second-order polynomial surfaces to spatial neighborhoods, specifically $z = f(x, y) = \boldsymbol{\alpha}_i^\top [x^2, y^2, xy, x, y]^\top$ in local tangent frames. A Gaussian kernel $w(d) = \exp(-d^2/\gamma^2)$ weights neighbor points, with $\gamma$ controlling the surface fit's influence radius. This kernel is shrunk iteratively, creating a coarse-to-fine smoothing schedule (Li et al., 2024).
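A compact NumPy sketch of one such weighted fit follows, assuming the neighbor points have already been transformed into the kernel point's local tangent frame (kernel point at the origin, z along the normal); the function name and interface are illustrative.

```python
import numpy as np

def fit_local_surface(pts, gamma):
    """Fit z = alpha^T [x^2, y^2, xy, x, y] to neighbors `pts` (N, 3),
    expressed in the kernel point's local tangent frame, with Gaussian
    distance weights w(d) = exp(-d^2 / gamma^2)."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.stack([x**2, y**2, x * y, x, y], axis=1)     # design matrix (N, 5)
    d2 = x**2 + y**2 + z**2                             # squared distance to kernel point
    sw = np.sqrt(np.exp(-d2 / gamma**2))                # sqrt weights for weighted LSQ
    alpha, *_ = np.linalg.lstsq(sw[:, None] * A, sw * z, rcond=None)
    return alpha                                        # local surface coefficients
```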

2. Scheduling Strategies and Algorithms

Schedules parameterize the dynamic strength or spatial range of the smoothing operator. In ODW-regularized SDF training, the multiplicative weight $\lambda_{\mathrm{ODW}}(t)$, with $t \in [0, 1]$ indicating normalized training progress, is controlled by interpolation among four keypoints: $(0.0, 10)$, $(0.2, 10)$, $(0.5, 0.001)$, $(1.0, 0.0)$. The main interpolation strategies, sketched in code after the list, are:

  • Constant: $\lambda(t) = 10$ throughout
  • Linear decay: $\lambda(t)$ decreases linearly between control points
  • Quintic (fifth-order easing): $\lambda(t) = w_i + (w_{i+1} - w_i)\,[1 - (1 - \tau)^5]$, with $\tau$ as normalized segment time
  • Step: $\lambda(t)$ changes abruptly at keypoints
  • Warm-up: inverse of decay, starting low and increasing (Yin et al., 5 Nov 2025)
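The following Python sketch implements these interpolation modes using the keypoints quoted above; the function name and the segment-lookup details are illustrative.

```python
import bisect

KEYPOINTS = [(0.0, 10.0), (0.2, 10.0), (0.5, 0.001), (1.0, 0.0)]

def odw_weight(t, mode="quintic"):
    """Return lambda_ODW(t) for normalized training progress t in [0, 1]."""
    if mode == "constant":
        return KEYPOINTS[0][1]
    if mode == "warmup":                 # mirror of linear decay: low start, rising
        return odw_weight(1.0 - t, "linear")

    # Locate the keypoint segment containing t.
    ts = [k[0] for k in KEYPOINTS]
    i = max(min(bisect.bisect_right(ts, t), len(KEYPOINTS) - 1) - 1, 0)
    (t0, w0), (t1, w1) = KEYPOINTS[i], KEYPOINTS[i + 1]

    if mode == "step":                   # hold w0 until the next keypoint
        return w0
    tau = (t - t0) / (t1 - t0)           # normalized segment time
    if mode == "linear":
        return w0 + (w1 - w0) * tau
    if mode == "quintic":                # fifth-order ease-out
        return w0 + (w1 - w0) * (1.0 - (1.0 - tau) ** 5)
    raise ValueError(f"unknown mode: {mode}")
```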

Pseudocode for loss integration in a standard training loop is provided, defining $\lambda_{\mathrm{ODW}}(t)$ at each iteration and aggregating the weighted curvature term into the total loss with the other terms (Dirichlet, sign-agnostic, Eikonal).
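In that spirit, the integration might look like the sketch below; `odw_weight` and `odw_loss` are the hypothetical helpers above, while `sample_points`, `dirichlet_loss`, `sign_agnostic_loss`, and `eikonal_loss` are placeholders for the real pipeline's sampler and loss terms, not actual APIs.

```python
def train(model, optimizer, num_steps, batch_size=4096):
    """Training-loop sketch with a scheduled ODW weight."""
    for step in range(num_steps):
        t = step / max(num_steps - 1, 1)        # normalized progress in [0, 1]
        lam = odw_weight(t, mode="quintic")     # scheduled lambda_ODW(t)

        pts = sample_points(batch_size)         # placeholder point sampler
        loss = (dirichlet_loss(model, pts)      # placeholder loss terms
                + sign_agnostic_loss(model, pts)
                + eikonal_loss(model, pts)
                + lam * odw_loss(model, pts))   # scheduled curvature term

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```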

For PSS-GOSO bundle adjustment, the smoothing kernel radius follows a geometric decay: $\gamma_{t+1} = \gamma_t / T_D$, with $T_D = 1.4$, initial $\gamma_0 = 3.0$ m, and up to $N_{\max} = 5$ stages. At each stage, scans are voxelized at resolution $\gamma$, kernel points are sampled, and polynomial fits are performed. Factors are accumulated into Levenberg-Marquardt normal equations, after which $\gamma$ is divided by $T_D$ (Li et al., 2024).
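In outline, the stage loop reduces to the sketch below; the voxelization, factor construction, and LM solve are placeholders, but the radius schedule reproduces the quoted constants (3.00, 2.14, 1.53, 1.09, 0.78 m).

```python
GAMMA_0, T_D, N_MAX = 3.0, 1.4, 5

def kernel_radii():
    """Geometric decay gamma_{t+1} = gamma_t / T_D over N_MAX stages:
    [3.00, 2.14, 1.53, 1.09, 0.78] m."""
    return [GAMMA_0 / T_D**k for k in range(N_MAX)]

def pss_bundle_adjustment(scans, poses):
    """Coarse-to-fine outer loop sketch; `voxelize_and_sample`,
    `surface_fit_factor`, and `lm_solve` are placeholders."""
    for gamma in kernel_radii():
        kernels = voxelize_and_sample(scans, poses, voxel_size=gamma)
        factors = [surface_fit_factor(k, gamma) for k in kernels]
        poses, converged = lm_solve(poses, factors)   # LM normal equations
        if converged:
            break
    return poses
```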

3. Selection of Schedule Parameters

Empirical studies guide schedule parameterization:

  • Neural SDFs: The initial ODW weight $w_0 \approx 10$ holds until 20% of training, suppressing large warps; it decays to $w_2 \approx 10^{-3}$ by 50%, allowing fine-scale surface detail; the final zero weight enables unconstrained recovery of acute features (Yin et al., 5 Nov 2025).
  • Bundle adjustment: Initial smoothing $\gamma_0 = 3$ m ensures robustness to sensor noise and outliers. Five iterations progressively shrink the kernel to $\gamma_5 \approx 0.78$ m. Faster decay overfits poor initial alignment; slower decay insufficiently rejects fine-structure outliers (Li et al., 2024).

Heuristics include $L_0$-penalized normal smoothing (parameter $\mu$) to protect edges, voxel-grid sampling for computational tractability, and incremental adjustment of auxiliary penalties ($\beta$) in optimization.

4. Empirical Evaluation and Observed Outcomes

On the ABC CAD dataset (Yin et al., 5 Nov 2025), time-varying ODW schedules outperform static baselines:

| Schedule | Chamfer Distance (×10³) | Improvement over baseline |
|---|---|---|
| FlatCAD (fixed) | 4.37 (±5.48) | baseline |
| Linear decay | 3.05 (±2.17) | ~30% |
| Quintic decay | 2.86 (±1.22) | ~35% |
| Step | 2.87 (±1.34) | ~34% |
| Warm-up (linear) | 3.24 (±2.37) | — (inferior) |

Qualitative analyses show that constant weights yield over-smoothed results, suppressing critical transitions, whereas decaying schedules enable recovery of sharp features and prevent late-stage geometric warping. Step schedules reach competitive accuracy but may introduce transient artifacts.

For LiDAR bundle adjustment, PSS-GOSO demonstrates high robustness and endpoint precision across platforms and environments. The geometric kernel decay ensures broad initial stabilization and accurate detail recovery upon convergence. Too-slow or too-fast kernel decay degrades performance, as previously ablated in PSS-BA studies (Li et al., 2024).

5. Integration in Optimization Pipelines

In neural SDF frameworks, progressive smoothing schedules are integrated directly into the training loop by modulating loss weights. At each iteration, the schedule is queried at the current $t$ and the resulting $\lambda_{\mathrm{ODW}}(t)$ is applied to the curvature loss.

In LiDAR bundle adjustment, the schedule steers voxel sampling, kernel formation, weight computation, and surface fitting procedures within each outer Levenberg-Marquardt loop. The kernel radius update is greedy: $\gamma \leftarrow \gamma / T_D$ after each stage, halting upon convergence or after $N_{\max}$ iterations.

6. Practical Recommendations and Implications

For neural SDF CAD reconstruction, a strong-start decay schedule is recommended: early high curvature smoothing prevents highly warped local minima, while progressive reduction unlocks detailed geometry. Quintic interpolation gives smooth transitions and optimal metric tradeoffs; linear is nearly as effective and easier to implement. Step schedules are suitable only for ablation due to possible instability. Adjusting the interval and final weights trades off surface sharpness and stability.

For LiDAR bundle adjustment, a progressive, 5-level geometric reduction is optimal for balancing robust global pose correction and fine-scale point cloud fidelity. Kernel shape, sampling density, and $L_0$-penalization parameters should be chosen considering the scene's structural complexity and the noise regime.

A plausible implication is that progressive smoothing schedules, whether time-based or spatial-scale-based, furnish a principled protocol for coarse-to-fine optimization in geometric inverse problems, conferring both stability against outliers and maximal detail recovery at convergence (Yin et al., 5 Nov 2025, Li et al., 2024).
