Geometry-Aware Gaussian Surfel Fusion
- The paper introduces geometry-aware Gaussian surfel fusion, merging anisotropic 2D Gaussian surfels with probabilistic multi-view updates for photorealistic mapping and robust pose tracking.
- It employs a detailed 2D surfel representation with explicit geometric regularization and uncertainty modeling, enabling adaptive rendering and precise multiview fusion.
- The approach outperforms traditional point-based and voxel-based methods, achieving millimeter-level accuracy and real-time frame rates in rigorous experimental benchmarks.
Geometry-aware Gaussian surfel fusion is a class of methodologies that combine anisotropic Gaussian primitives flattened into surfels—planar disk-like patches aligned with local surface geometry—with multi-view, probabilistic, and differentiable fusion rules. This paradigm achieves photorealistic real-time mapping, robust pose tracking, and highly precise surface reconstruction. The current state-of-the-art approaches leverage both RGB-D and LiDAR sensors and employ learnable “2D Gaussian surfels,” adaptive rendering schemes, uncertainty-aware fusion, and explicit geometric regularization to address fundamental limitations of earlier point-based, voxel-based, and 3D Gaussian splatting schemes.
1. Mathematical Representation and Surfel Parameterization
Geometry-aware Gaussian surfel fusion adopts a surfel representation that is embedded within a 2D tangent plane but retains 3D geometric and appearance information. A surfel is characterized by:
- Center position $\mathbf{p} \in \mathbb{R}^3$
- Two principal tangent directions $\mathbf{t}_u, \mathbf{t}_v$
- Scales $s_u, s_v$ along these axes
- Normal $\mathbf{n} = \mathbf{t}_u \times \mathbf{t}_v$; the rotation matrix $\mathbf{R} = [\mathbf{t}_u,\, \mathbf{t}_v,\, \mathbf{n}]$
- Opacity $\alpha$
- Appearance coefficients (e.g., spherical harmonics) $\mathbf{c}$
The spatial probability distribution is modeled by a 2D Gaussian within the surfel plane, yielding a covariance $\Sigma = \mathbf{R}\,\mathrm{diag}(s_u^2, s_v^2, 0)\,\mathbf{R}^\top$, where a point on the surfel is $P(u,v) = \mathbf{p} + s_u u\,\mathbf{t}_u + s_v v\,\mathbf{t}_v$, with Gaussian weight $\mathcal{G}(u,v) = \exp\!\bigl(-\tfrac{1}{2}(u^2 + v^2)\bigr)$. The rendering proceeds via front-to-back alpha compositing:

$$\mathbf{C}(\mathbf{x}) = \sum_i \mathbf{c}_i\, w_i, \qquad w_i = \alpha_i\, \mathcal{G}_i(\mathbf{x}) \prod_{j<i} \bigl(1 - \alpha_j\, \mathcal{G}_j(\mathbf{x})\bigr),$$

where the color $\mathbf{C}$, depth $D$, and normal $\mathbf{N}$ at pixel $\mathbf{x}$ are obtained by compositing $\mathbf{c}_i$, $d_i$, and $\mathbf{n}_i$ with the same weights $w_i$,
as formulated in S³LAM (Fan et al., 28 Jul 2025), GauS-SLAM (Su et al., 3 May 2025), and EGG-Fusion (Pan et al., 1 Dec 2025). Flattening the third axis yields a pure disk surfel (cf. Dai et al., 2024), effectively aligning the representation with the local surface.
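The front-to-back compositing described above can be sketched per ray. The following is a minimal NumPy illustration; function name and argument layout are assumptions, and the weight rule $w_i = \alpha_i \mathcal{G}_i \prod_{j<i}(1-\alpha_j \mathcal{G}_j)$ follows the standard 2D Gaussian splatting formulation rather than any one paper's code:

```python
import numpy as np

def composite_front_to_back(colors, depths, normals, alphas):
    """Front-to-back alpha compositing of surfel samples along one ray.

    colors:  (N, 3) per-surfel RGB, sorted front to back
    depths:  (N,)   per-surfel depth along the ray
    normals: (N, 3) per-surfel unit normals
    alphas:  (N,)   effective opacities alpha_i * G_i(x), each in [0, 1]
    Returns composited (color, depth, normal, accumulated_opacity).
    """
    transmittance = 1.0
    color = np.zeros(3)
    depth = 0.0
    normal = np.zeros(3)
    acc = 0.0
    for c, d, n, a in zip(colors, depths, normals, alphas):
        w = a * transmittance           # blending weight w_i
        color += w * c
        depth += w * d
        normal += w * n
        acc += w
        transmittance *= (1.0 - a)      # T_{i+1} = T_i * (1 - alpha_i)
    return color, depth, normal, acc
```

Because the loop runs strictly front to back, a fully opaque front surfel drives the transmittance to zero and occludes everything behind it.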
2. Surfel Fusion, Uncertainty Modeling, and Optimization
Fusion of geometry and appearance evidence from multiple views is accomplished via adaptive, probabilistic update rules operating on all surfel parameters. The core optimization objective typically aggregates photometric ($\mathcal{L}_{\mathrm{rgb}}$), depth ($\mathcal{L}_{\mathrm{depth}}$), and normal-consistency ($\mathcal{L}_{\mathrm{normal}}$) losses, sometimes with explicit geometric or statistical regularization $\mathcal{L}_{\mathrm{reg}}$:

$$\mathcal{L} = \lambda_{\mathrm{rgb}}\,\mathcal{L}_{\mathrm{rgb}} + \lambda_{d}\,\mathcal{L}_{\mathrm{depth}} + \lambda_{n}\,\mathcal{L}_{\mathrm{normal}} + \lambda_{r}\,\mathcal{L}_{\mathrm{reg}}.$$

Parameters are updated by gradient descent, fusing new RGB-D or LiDAR evidence. Redundant or unobserved surfels are pruned according to alpha coverage and error thresholds.
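As a concrete sketch of how such loss terms might be aggregated (the L1 penalties, the cosine-based normal term, and the weights here are illustrative assumptions, not the exact formulation of any cited system):

```python
import numpy as np

def mapping_loss(rend_rgb, obs_rgb, rend_depth, obs_depth,
                 rend_normal, obs_normal,
                 w_rgb=1.0, w_depth=0.5, w_normal=0.1):
    """Aggregate photometric, depth, and normal-consistency losses.

    All inputs are per-pixel arrays (normals assumed unit-length);
    weights are illustrative, not values from any particular paper.
    """
    l_rgb = np.abs(rend_rgb - obs_rgb).mean()         # L1 photometric
    l_depth = np.abs(rend_depth - obs_depth).mean()   # L1 depth
    # normal consistency: 1 - cosine similarity of unit normals
    cos = np.sum(rend_normal * obs_normal, axis=-1)
    l_normal = (1.0 - cos).mean()
    return w_rgb * l_rgb + w_depth * l_depth + w_normal * l_normal
```

In practice the systems differentiate this objective through the rasterizer and step all surfel parameters jointly; here only the scalar objective is shown.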
Uncertainty is handled by per-surfel covariance tracking, e.g., using information filters (Pan et al., 1 Dec 2025) that update the mean and covariance of each surfel's state in information form via

$$\Lambda \leftarrow \Lambda + H^\top R^{-1} H, \qquad \eta \leftarrow \eta + H^\top R^{-1} z,$$

where $\Lambda$ is the information matrix, $\eta$ the information vector, $H$ the measurement Jacobian, and $R$ the measurement noise covariance. Surfel fusion also extends to pose-graph approaches, where surfel-to-surfel Mahalanobis constraints align patches across keyframes, driving global consistency to sub-pixel levels (Park et al., 31 Jul 2025).
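A minimal information-form update looks as follows. This is a generic textbook sketch, not the EGG-Fusion implementation; the state layout, `H`, and `R` are assumptions:

```python
import numpy as np

def info_fuse(eta, Lam, z, H, R):
    """One information-form (inverse-covariance) measurement update.

    State x has information vector eta = Lam @ x and information
    matrix Lam. A linear measurement z = H @ x + noise with noise
    covariance R is fused additively:
        Lam' = Lam + H^T R^{-1} H
        eta' = eta + H^T R^{-1} z
    """
    Rinv = np.linalg.inv(R)
    Lam_new = Lam + H.T @ Rinv @ H
    eta_new = eta + H.T @ Rinv @ z
    return eta_new, Lam_new

def recover(eta, Lam):
    """Recover mean and covariance from information form."""
    cov = np.linalg.inv(Lam)
    return cov @ eta, cov
```

The additive form is what makes multi-view fusion cheap: each new observation simply adds its information contribution, and the mean/covariance are recovered only when needed.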
3. Adaptive Surface Rendering and Multi-View Consistency
Adaptive rendering strategies address ambiguous or noisy regions, sharpening edges and increasing multi-view consistency. For example, S³LAM computes a depth-distortion measure along each ray, $\mathcal{D} = \sum_{i<j} w_i w_j\,|d_i - d_j|$, where $w_i$ are blending weights and $d_i$ surfel depths; exceeding a threshold triggers selection of a dominant surfel for color and geometry (Fan et al., 28 Jul 2025, Su et al., 3 May 2025). Edge-aware depth blending, such as surface-aware depth adjustment in GauS-SLAM,
suppresses occluded surfel bias, significantly improving geometry quality under novel viewpoints.
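The depth-distortion test and dominant-surfel fallback can be sketched as follows. The pairwise form $\sum_{i<j} w_i w_j |d_i - d_j|$ is the standard 2D Gaussian splatting distortion term; the threshold value and the argmax fallback are illustrative assumptions:

```python
import numpy as np

def depth_distortion(weights, depths):
    """Pairwise depth-distortion along one ray: sum_{i<j} w_i w_j |d_i - d_j|.

    Large values mean the blended surfels disagree on depth,
    e.g., at occlusion boundaries or thin structures.
    """
    w = np.asarray(weights, dtype=float)
    d = np.asarray(depths, dtype=float)
    diff = np.abs(d[:, None] - d[None, :])   # |d_i - d_j| for all pairs
    return 0.5 * (w @ diff @ w)              # halve: matrix counts each pair twice

def select_dominant(weights, depths, threshold=0.05):
    """If distortion exceeds the (illustrative) threshold, return the index
    of the dominant, highest-weight surfel; else None (blend all surfels)."""
    if depth_distortion(weights, depths) > threshold:
        return int(np.argmax(weights))
    return None
```

Rays whose surfels agree on depth keep the smooth alpha blend; only ambiguous rays collapse to a single dominant surfel, which is what sharpens edges without flattening well-observed surfaces.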
Multi-view fusion is reinforced by geometric regularization, monocular normal priors (from foundation models), and normal-depth consistency losses. Incorporation of strong monocular normal priors corrects ambiguous regions and stabilizes surfel alignment (Dai et al., 2024, Shen et al., 2024, Yang et al., 20 Aug 2025).
4. Advanced Fusion on Lie Groups and Covariance Control
When fusing pose and orientation uncertainties, Gaussian distributions on Lie groups (SE(3), SO(3)) are mapped into a common tangent space, using parallel transport and curvature corrections for optimal covariance adjustment (Ge et al., 2024). For surfel fusion in pose-graph SLAM, covariance transfer between reference frames leverages the Jacobian of the exponential map, $\Sigma_b \approx \mathbf{J}\,\Sigma_a\,\mathbf{J}^\top$, with $\mathbf{J}$ relating the tangent spaces of the two frames. Efficient approximations (parallel transport, curvature corrections) realize near-optimal accuracy with low computational overhead, enabling real-time fusion of position and orientation uncertainties for large surfel sets.
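A first-order sketch of covariance transport on SO(3), where the adjoint of a rotation is the rotation itself, together with the standard closed-form right Jacobian of the exponential map (using it as the curvature-correction factor is an illustrative assumption, not the cited method):

```python
import numpy as np

def adjoint_transport_so3(Sigma_a, R_rel):
    """First-order transport of a tangent-space covariance on SO(3).

    For SO(3) the adjoint of a rotation R is R itself, so a covariance
    expressed in the tangent space of frame a maps to frame b (related
    by relative rotation R_rel) as Sigma_b = R_rel @ Sigma_a @ R_rel.T.
    """
    return R_rel @ Sigma_a @ R_rel.T

def right_jacobian_so3(phi):
    """Closed-form right Jacobian J_r(phi) of the SO(3) exponential map."""
    theta = np.linalg.norm(phi)
    K = np.array([[0.0, -phi[2], phi[1]],
                  [phi[2], 0.0, -phi[0]],
                  [-phi[1], phi[0], 0.0]])        # skew-symmetric [phi]_x
    if theta < 1e-8:
        return np.eye(3) - 0.5 * K               # small-angle expansion
    a = (1.0 - np.cos(theta)) / theta**2
    b = (theta - np.sin(theta)) / theta**3
    return np.eye(3) - a * K + b * (K @ K)
```

Both operations are a handful of 3x3 products, which is why near-optimal covariance transfer stays cheap even for large surfel sets.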
Covariance control is further enforced through scale-bounding using sigmoid constraints, $\sigma_\text{bounded} = \sigma_\min + (\sigma_\max-\sigma_\min)\,\mathrm{sigmoid}(s)$, preventing unconstrained Gaussian growth and yielding compact, crisp representations (Park et al., 31 Jul 2025).
5. Geometry-Aware Fusion in SLAM and Surface Reconstruction
State-of-the-art SLAM systems (S³LAM (Fan et al., 28 Jul 2025), GauS-SLAM (Su et al., 3 May 2025), EGG-Fusion (Pan et al., 1 Dec 2025), GSFusion (Park et al., 31 Jul 2025)) instantiate this fusion pipeline at scale for camera and LiDAR/IMU inputs. Incremental attachment, periodic surfel initialization, local–global map architectures, and fusion-aware bundle adjustment integrate RGB-D and LiDAR evidence into surfel maps. The surfel-centric approach supports sparse-to-dense real-time mapping (24 FPS in EGG-Fusion), robust tracking under severe occlusion, and millimeter-level geometric and pose accuracy.
Comparative results demonstrate that geometry-aware surfel fusion outperforms prior 3D Gaussian Splatting and neural volumetric schemes in surface completeness, normal alignment, tracking robustness, and memory efficiency. Quantitative benchmarks include Replica, ScanNet++, DTU, and Tanks-and-Temples with metrics such as Chamfer distance, normal consistency, PSNR, SSIM, and LPIPS.
6. Extensions: Radiance Field Rendering and Hybrid Architectures
Hybrid bi-scale architectures, such as Gaussian-enhanced Surfels (GES) (Ye et al., 24 Apr 2025), combine opaque 2D surfel layers for coarse geometry with sparse 3D Gaussians for high-frequency appearance. This approach enables sorting-free, ultra-fast rendering (675–1135 FPS) and modular extensions such as anti-aliasing (Mip-GES), storage compaction (Compact-GES), and improved geometry via 2D-GES. Sorting-free blending yields view-consistent images and suppresses “popping” artifacts, while surfel/Gaussian aggregation enables flexible surface smoothing.
Advanced inverse rendering methods further exploit surfel-based representations for material decomposition and photorealistic relighting, using physics-based shading (split-sum approximation), Monte Carlo sampling, and high-frequency specular compensation (Yang et al., 20 Aug 2025).
7. Practical Impact and Experimental Results
Recent systems demonstrate millimeter-level surface reconstruction accuracy, robust geometric tracking, and real-time end-to-end operation. The table below summarizes key quantitative results from Replica, ScanNet++, and DTU (Pan et al., 1 Dec 2025, Fan et al., 28 Jul 2025, Su et al., 3 May 2025):
| Method | Acc (Replica, cm; DTU in mm where noted) | Comp (ScanNet++, cm) | FPS | PSNR (dB) | Storage (MB) |
|---|---|---|---|---|---|
| EGG-Fusion | 0.60 | 0.91 | 24 | 25.70 | – |
| RTG-SLAM | 0.80 | 1.22 | 15 | 24.77 | – |
| S³LAM | 0.47 | – | 8 | – | – |
| 3DGS | 1.97 (DTU mm) | – | 675 | 27.38 | 734 |
| 2D-GES | 0.79 (DTU mm) | – | – | – | – |
| GES | – | – | 1135 | 27.42 | 185 |
Qualitative results show sharp edge recovery, minimal color/depth artifacts, smooth surface meshes, and persistent tracking under severe occlusions, with surfel-based SLAM and rendering retaining geometric fidelity and visual consistency across difficult scenarios.
References
- S³LAM: Surfel Splatting SLAM for Geometrically Accurate Tracking and Mapping (Fan et al., 28 Jul 2025)
- GauS-SLAM: Dense RGB-D SLAM with Gaussian Surfels (Su et al., 3 May 2025)
- EGG-Fusion: Efficient 3D Reconstruction with Geometry-aware Gaussian Surfel on the Fly (Pan et al., 1 Dec 2025)
- SolidGS: Consolidating Gaussian Surfel Splatting for Sparse-View Surface Reconstruction (Shen et al., 2024)
- GSurf: 3D Reconstruction via Signed Distance Fields with Direct Gaussian Supervision (Xu et al., 2024)
- High-quality Surface Reconstruction using Gaussian Surfels (Dai et al., 2024)
- Gaussian Set Surface Reconstruction through Per-Gaussian Optimization (Huang et al., 25 Jul 2025)
- GSFusion: Globally Optimized LiDAR-Inertial-Visual Mapping for Gaussian Splatting (Park et al., 31 Jul 2025)
- When Gaussian Meets Surfel: Ultra-fast High-fidelity Radiance Field Rendering (Ye et al., 24 Apr 2025)
- A Geometric Perspective on Fusing Gaussian Distributions on Lie Groups (Ge et al., 2024)
- GOGS: High-Fidelity Geometry and Relighting for Glossy Objects via Gaussian Surfels (Yang et al., 20 Aug 2025)