Surfels for Geometry: Efficient 3D Representation
- Surfels for Geometry are oriented spatial primitives represented as 2D elliptical Gaussians that approximate and reconstruct complex 3D surfaces.
- They leverage multi-view sampling, blue-noise selection, and curvature-driven densification to ensure high-fidelity rendering and real-time performance.
- Applications include real-time SLAM, dynamic scene reconstruction, and text-to-3D modeling, offering robust geometry and material decoupling.
Surfels, or surface elements, are oriented, spatially extended primitives used to represent and reconstruct complex 3D geometry by discretizing visible surfaces as small elliptical disks or anisotropic Gaussians. Each surfel typically encodes geometric position, orientation, spatial extent, and often appearance or material information. The contemporary generation and deployment of surfels for geometry representation, novel view synthesis, and inverse rendering are based on rigorous mathematical formulations, differentiable splatting, and multi-view optimization. Surfels enable efficient level-of-detail management, high-fidelity surface normal estimation, rapid real-time rendering, and robust geometry–appearance decoupling across static and dynamic, rigid and non-rigid, reflective or matte scenes.
1. Mathematical Definitions and Core Surfel Primitives
The canonical surfel in current literature is an oriented 2D elliptical Gaussian in $\mathbb{R}^3$: a spatial density or indicator for a local tangent-plane patch. Its mathematical definition generalizes as follows:
- Center $\mathbf{p} \in \mathbb{R}^3$.
- Local frame $R = [\mathbf{t}_u, \mathbf{t}_v, \mathbf{n}] \in SO(3)$ encodes tangent axes $\mathbf{t}_u, \mathbf{t}_v$ and normal $\mathbf{n}$.
- Anisotropic in-plane scales $s_u, s_v$ set the major/minor axes (disk radii) in tangent space.
- Covariance $\Sigma = R\,\mathrm{diag}(s_u^2, s_v^2, \epsilon^2)\,R^\top$ with negligible thickness $\epsilon \to 0$.
- Opacity $\alpha \in [0, 1]$.
- Appearance: view-dependent or spherical-harmonics-based color $c$, possibly material/BRDF parameters.
The spatial kernel of surfel $i$ is $G_i(\mathbf{x}) = \exp\!\left(-\tfrac{1}{2}(\mathbf{x}-\mathbf{p}_i)^\top \Sigma_i^{-1} (\mathbf{x}-\mathbf{p}_i)\right)$ (Chen et al., 10 Nov 2024, Dai et al., 27 Apr 2024, Jiang et al., 26 Nov 2024, Kouros et al., 25 Apr 2025, Yang et al., 20 Aug 2025, Fan et al., 28 Jul 2025).
This formulation supports both differentiable rasterization and closed-form computation for ray–disk intersection, surface normals, and local curvature (Lee et al., 15 Oct 2024). In the limit of surfels densely tessellating a manifold, the union of planar supports converges to the underlying surface (Kouros et al., 25 Apr 2025, Chen et al., 10 Nov 2024).
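The parameterization above can be sketched directly in code. This is a minimal, illustrative implementation of the surfel primitive and its flat-Gaussian kernel, not any particular paper's data structure; all names are assumptions.

```python
import numpy as np

class Surfel:
    """Oriented 2D elliptical Gaussian: center, tangent frame, in-plane scales."""

    def __init__(self, center, tangent_u, tangent_v, scales, opacity, eps=1e-4):
        self.p = np.asarray(center, float)       # center p in R^3
        self.tu = np.asarray(tangent_u, float)   # tangent axis t_u (unit)
        self.tv = np.asarray(tangent_v, float)   # tangent axis t_v (unit)
        self.n = np.cross(self.tu, self.tv)      # normal n = t_u x t_v
        self.n /= np.linalg.norm(self.n)
        self.s = np.asarray(scales, float)       # in-plane scales (s_u, s_v)
        self.alpha = float(opacity)              # opacity in [0, 1]
        self.eps = eps                           # negligible thickness epsilon

    def rotation(self):
        # Local frame R = [t_u, t_v, n] stacked as columns.
        return np.stack([self.tu, self.tv, self.n], axis=1)

    def covariance(self):
        # Sigma = R diag(s_u^2, s_v^2, eps^2) R^T -- flat along the normal.
        R = self.rotation()
        S = np.diag([self.s[0] ** 2, self.s[1] ** 2, self.eps ** 2])
        return R @ S @ R.T

    def kernel(self, x):
        # G(x) = exp(-0.5 (x - p)^T Sigma^{-1} (x - p)); equals 1 at the center.
        d = np.asarray(x, float) - self.p
        return float(np.exp(-0.5 * d @ np.linalg.inv(self.covariance()) @ d))
```

The near-zero `eps` along the normal is what makes the Gaussian effectively a 2D disk embedded in 3D; setting it exactly to zero would make $\Sigma$ singular, so implementations keep a tiny regularizer.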
2. Surfel Generation, Sampling, and Initialization
Surfel placement is governed by both geometric criteria and computational goals. Key methodologies include:
- Multi-view visible-surface sampling: Surfaces visible from multiple orthographic or perspective directions are sampled, storing position, normal, and material attributes per view (Jähn, 2013).
- Blue-noise and Poisson-disk selection: To avoid clusters and ensure uniformity, surfels are selected and ordered to maximize minimal inter-surfel distances. Progressive Blue Surfels uses dart-throwing and octree-accelerated farthest-point queries, so that any prefix of the surfel array is a well-distributed surface proxy (Jähn, 2013).
- Pixel- or grid-aligned placement: Some schemes initialize surfels in a regular mesh (e.g., grid in RoGS (Feng et al., 23 May 2024), per-pixel in FLAGS (Yu et al., 13 Jun 2024)), leveraging known depth/normal maps or vehicle pose priors for immediate geometry-aligned initialization.
- Curvature- and gradient-driven densification: Surfels are refined or split adaptively based on photometric loss gradients or local curvature, supporting topology changes and detail adaptation for dynamic or complex geometry (Chen et al., 10 Nov 2024, Cao et al., 9 Dec 2025).
- Atlas/mesh chart binding: In mesh-driven systems, surfels are attached to parametric chart atlases, inheriting local surface coordinates and deformation, as with MAtCha (Guédon et al., 9 Dec 2024) and SurFhead (Lee et al., 15 Oct 2024).
Surfel initialization is thus highly adaptive—balancing visible surface coverage, spatial uniformity, and data-driven priors (e.g., trajectories, monocular normals, neural predictions).
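The prefix property behind blue-noise selection can be illustrated with a greedy farthest-point ordering over candidate surfel centers. This is a brute-force O(n²) sketch under assumed names; the cited work accelerates the nearest-selected queries with an octree.

```python
import numpy as np

def farthest_point_order(points, seed=0):
    """Greedily order points so every prefix is an approximately uniform,
    blue-noise-like subsample of the input set."""
    pts = np.asarray(points, float)
    order = [seed]
    # Distance from every point to its nearest already-selected point.
    dist = np.linalg.norm(pts - pts[seed], axis=1)
    for _ in range(len(pts) - 1):
        nxt = int(np.argmax(dist))  # candidate farthest from the selection
        order.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(pts - pts[nxt], axis=1))
    return order
```

Because each new point maximizes its distance to everything chosen so far, truncating the returned order at any length yields a well-spread surface proxy, which is exactly what makes prefix-based level-of-detail work.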
3. Surfel-based Differentiable Rendering and Geometry Reconstruction
Rendering with surfels involves projecting each disk or Gaussian onto the image plane and performing differentiable “splatting,” incorporating occlusion, alpha-compositing, and material transfer. The general pipeline is:
- Projection and overlap determination: Each surfel is mapped to an ellipse in image space via the local-to-camera Jacobian (Dai et al., 27 Apr 2024, Jiang et al., 26 Nov 2024).
- Alpha-compositing: Colors and opacities are accumulated in front-to-back order. For the surfels $i = 1, \dots, N$ in a ray's support, the composited color is $C = \sum_{i=1}^{N} c_i\,\alpha_i G_i \prod_{j<i} \left(1 - \alpha_j G_j\right)$ (Chen et al., 10 Nov 2024, Kouros et al., 25 Apr 2025, Dai et al., 27 Apr 2024).
- Volumetric rendering generalization: For physically-plausible integration over surfaces thickened to volumes or for stochastic geometry fields, closed-form splatting formulas relate Gaussian footprints to density, incorporating self-occlusion and continuous color blending for cluster-robustness (Jiang et al., 26 Nov 2024).
- Normals and depth: Each surfel encodes its normal analytically; depth maps are extracted by precise ray–surfel intersection (quadratic solution in the ray parameter $t$) (Lee et al., 15 Oct 2024, Dai et al., 27 Apr 2024).
- Differentiable objectives: Optimization targets photometric error (typically $\ell_1$/$\ell_2$ and D-SSIM), normal and depth consistency, opacity regularization, and mask alignment (Dai et al., 27 Apr 2024, Kouros et al., 25 Apr 2025, Yang et al., 20 Aug 2025).
Multi-view and temporal consistency is enforced via curvature/normal alignment across frames and batch refinements, reducing flicker and topological drift (Chen et al., 10 Nov 2024, Dong et al., 8 Oct 2025).
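The intersection and compositing steps above can be sketched per-ray on the CPU. This is a simplified illustration, assuming flat surfels stored as plain dicts with made-up keys; real systems splat projected 2D footprints per-pixel on the GPU rather than tracing rays.

```python
import numpy as np

def intersect(o, d, srf):
    """Ray o + t*d against the surfel's supporting plane (x - p) . n = 0.
    Returns (depth t, Gaussian footprint weight) or None on a miss."""
    denom = np.dot(d, srf["n"])
    if abs(denom) < 1e-9:          # ray parallel to the disk plane
        return None
    t = np.dot(srf["p"] - o, srf["n"]) / denom
    if t <= 0:                     # intersection behind the ray origin
        return None
    x = o + t * d
    # Tangent-plane coordinates (u, v), normalized by the in-plane scales.
    u = np.dot(x - srf["p"], srf["tu"]) / srf["s"][0]
    v = np.dot(x - srf["p"], srf["tv"]) / srf["s"][1]
    return t, np.exp(-0.5 * (u * u + v * v))

def composite(o, d, surfels):
    """Front-to-back alpha compositing:
    C = sum_i c_i a_i G_i * prod_{j<i} (1 - a_j G_j)."""
    hits = []
    for s in surfels:
        h = intersect(o, d, s)
        if h is not None:
            hits.append((h[0], h[1], s))
    hits.sort(key=lambda h: h[0])  # sort by depth t, nearest first
    color, transmittance = np.zeros(3), 1.0
    for t, w, s in hits:
        a = s["alpha"] * w
        color += transmittance * a * np.asarray(s["c"], float)
        transmittance *= 1.0 - a
    return color
```

Note that the compositing loop is exactly the discrete over-operator: each surfel contributes its color scaled by its effective alpha and the transmittance accumulated in front of it.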
4. Applications and Extensions in Static, Dynamic, and SLAM Scenarios
Surfels are now a foundational primitive across a broad spectrum:
- Real-time geometry and SLAM: In SurfelWarp (Gao et al., 2019) and EGG-Fusion (Pan et al., 1 Dec 2025), surfel sets replace volumetric TSDF for efficient, memory-scaled mapping, supporting nonrigid deformations, explicit covariance-guided fusion, and robust GPU mapping at 24–30 Hz. LAM leverages surfel splatting for geometry-accurate tracking, analytic SE(3) Jacobians, and state-of-the-art mapping accuracy (Fan et al., 28 Jul 2025).
- Mesh-surfels hybridization: MAtCha attaches surfels to chart atlases for explicit surface recovery from sparse RGB, balancing mesh continuity and photorealistic mixing (Guédon et al., 9 Dec 2024).
- Dynamic scene reconstruction: AT-GS (Chen et al., 10 Nov 2024) and DirectGaussian (Dong et al., 8 Oct 2025) extend surfel frameworks with adaptive densification, per-frame fusion, and temporally consistent curvature regularization, capturing dynamic/deforming scenes with emerging and disappearing content.
- Text-to-3D and generative modeling: Surfels enable direct mapping from 2D diffusion priors to explicit geometry, with multi-view normal and curvature constraints stabilizing multi-faceted shapes (Dong et al., 8 Oct 2025).
- Material and relighting models: GOGS (Yang et al., 20 Aug 2025), RGS-DR (Kouros et al., 25 Apr 2025), and (Jiang et al., 23 Sep 2025) use surfels as the core primitive for per-pixel BRDF estimation, global SH-based radiosity, and fast relighting, supporting glossy/specular and indirect illumination scenarios while maintaining precise geometry.
The use of surfels in SLAM and dynamic mapping (e.g., (Pan et al., 1 Dec 2025, Gao et al., 2019)), and their integration with neural texturing or dictionary-based appearance (Nexels (Rong et al., 15 Dec 2025)), demonstrates broad flexibility and application-specific adaptation.
5. Advantages, Limitations, and Empirical Performance
Key strengths
- Surface normal fidelity and explicit geometry: The anisotropic, oriented disk parameterization leads to analytical normals per surfel, stable curvature estimates, and mesh-quality surface extraction (e.g., Chamfer distances 0.7–1.0 mm on DTU from (Dai et al., 27 Apr 2024, Guédon et al., 9 Dec 2024)).
- Level-of-detail and real-time rendering: Prefix properties from blue-noise selection (Jähn, 2013), scale-adaptive opacity modulation (Cao et al., 9 Dec 2025), and depth-aware culling enable rendering rates from 30 FPS (scene rendering (Jähn, 2013)) to 4,871 FPS (surfel pass in GES (Ye et al., 24 Apr 2025)).
- Robustness to noise and geometry ambiguity: Information-filter fusion (Pan et al., 1 Dec 2025) and geometric priors (monocular normals, foundation-derived depths (Yang et al., 20 Aug 2025)) suppress noise and local minima, outperforming both point-based splatting and volumetric/TSDF methods in accuracy and visual coherence.
- Material and lighting decoupling: Through SH representation or neural field attachment, surfels can support high-fidelity texturing and relighting (Jiang et al., 23 Sep 2025, Yang et al., 20 Aug 2025, Rong et al., 15 Dec 2025).
Limitations
- Storage scaling in dynamic scenes: Per-frame surfel sets can incur significant memory overhead (Chen et al., 10 Nov 2024).
- Challenges in textureless/specular regions: Reliance on photometric/normal fusion can be degraded under poor illumination or non-Lambertian surfaces; extensions use curvature and higher-order priors (Yang et al., 20 Aug 2025, Pan et al., 1 Dec 2025, Chen et al., 10 Nov 2024).
- Mobile performance dependence: Some instantiations require high-end GPU tensor-core hashing and large on-chip memory for real-time operation (Rong et al., 15 Dec 2025).
- Aliasing in thin structures or at very low surfel counts: Geometric coverage may be insufficient, with neural fields forced to inpaint unmodeled regions (Rong et al., 15 Dec 2025).
Empirical evaluations consistently demonstrate that surfel-based methods surpass classical point- or volumetric-based representations in both geometric and view-synthesis metrics, providing 10–30× memory or compute savings, and enabling real-time or near real-time loop closure and novel-view generation in modern systems (Dai et al., 27 Apr 2024, Ye et al., 24 Apr 2025, Feng et al., 23 May 2024, Pan et al., 1 Dec 2025, Cao et al., 9 Dec 2025).
6. Comparative Analysis and Generalizations
The evolution from "Progressive Blue Surfels" (Jähn, 2013) to modern differentiable, SH-, and neural-field-augmented surfels marks substantial advances:
| Representation | Geometry Type | Appearance Model | LOD/Scale Control | Notable Use Cases |
|---|---|---|---|---|
| Classic Surfels | Disk + normal | Vertex color/texture | Blue-noise prefix, octree | Real-time rendering (Jähn, 2013, Gao et al., 2019) |
| Gaussian Surfels | 2D ellipse, cov., SH | SH, neural, BRDF params | Opacity, scale-adaptive | Splatting, dynamic reconstruction, relighting |
| Mesh/Atlas-Attached | Chart-param. disk | Per-chart color, deformation | Atlas LOD, deformation | High-quality mesh hybridization (Guédon et al., 9 Dec 2024) |
| Neural-field augmented | As above, minimal | Global field, per-disk SH | Farthest sampling, adaptive | Texture–geometry decoupling (Rong et al., 15 Dec 2025) |
A core unifying trait is differentiable rasterization via ellipsoidal Gaussian splatting, whether for physics-based inverse rendering, dense mapping, or appearance transfer. Generalizations include 2D vs 3D support (e.g., for thin manifolds), charted/atlas-based mesh bindings, and classical mesh-to-surfel deformation via polar decomposition (Lee et al., 15 Oct 2024).
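The polar-decomposition step mentioned above, used to extract the rotational part of a mesh deformation gradient when binding surfels to a deforming mesh, can be sketched via the SVD. This is an illustrative helper under assumed names, not code from the cited work.

```python
import numpy as np

def polar_rotation(F):
    """Rotational factor R of the polar decomposition F = R S,
    where S is symmetric positive semi-definite. Computed via SVD,
    with a sign fix so that det(R) = +1 (a proper rotation)."""
    U, _, Vt = np.linalg.svd(np.asarray(F, float))
    R = U @ Vt
    if np.linalg.det(R) < 0:
        U[:, -1] *= -1             # flip to avoid a reflection
        R = U @ Vt
    return R
```

Applying this per deformation gradient rotates each surfel's local frame (and hence its normal) consistently with the underlying mesh, while the stretch factor $S$ is absorbed into the in-plane scales.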
The use of surfels for geometry underpins state-of-the-art achievements in scalability, fidelity, and adaptability for both static and dynamic three-dimensional vision and graphics tasks.