
Gaussian Splatting with Discretized SDF

Updated 10 February 2026
  • The method integrates explicit 3D Gaussian representation with discretized SDFs to achieve accurate and robust surface geometry reconstruction.
  • It employs bidirectional coupling with supervision via depth, normal consistency, and projection constraints to align explicit Gaussians with implicit SDFs.
  • The approach enables efficient relightable asset generation, fine mesh extraction, and real-time rendering while mitigating artifacts common in pure Gaussian splatting.

Gaussian splatting with discretized signed distance fields (SDF) combines high-performance explicit 3D Gaussian representations with grid- or point-sampled SDFs to achieve accurate surface geometry, robust relightable asset generation, and efficient real-time rendering. This paradigm overcomes the limitations of pure 3D Gaussian splatting—such as poor geometry regularization and susceptibility to outlier “floaters”—while also mitigating the high computational cost and smoothing artifacts of classical SDF-based volumetric rendering. By tightly coupling a discretized SDF to the Gaussian primitives, modern methods achieve bidirectional supervision, surface-constrained optimization, and scalable mesh or appearance extraction.

1. Discretized SDF Formulations

Discretized SDFs underpin this synergy between explicit and implicit geometry. Multiple parameterization strategies are used:

  • Factorized Grid + MLP (Tensor-style): SDFs are represented on a high-resolution 3D grid (e.g., 400^3), where a learnable latent field is factorized along axes and composed with small MLPs for local interpolation (Zhu et al., 2024). The grid can be factored as:

V(p) = \sum_{k=1}^{r} \left[ v_k^X(p_x) \circ M_k^{YZ}(p_y, p_z) \oplus v_k^Y(p_y) \circ M_k^{XZ}(p_x, p_z) \oplus v_k^Z(p_z) \circ M_k^{XY}(p_x, p_y) \right]

followed by s(p) = \Theta(V(p), p) to yield the SDF value at p.

  • Trilinear Grid: Pure voxel grids store s_g at each lattice center; SDF queries are trilinearly interpolated, commonly at resolutions such as N^3 = 256^3 or 400^3 (Gao et al., 21 Jul 2025; Zhu et al., 21 Jul 2025).
  • Hash Grid + MLP: For better scalability, SDF fields can be implemented with multi-resolution hash grids and compact MLPs, supporting fine geometry and scalable supervision (Lyu et al., 2024; Yu et al., 2024).
  • Octree Discretization: To economize memory and provide spatial adaptivity, SDFs are encoded at octree leaf corners with trilinear interpolation for arbitrary queries (Li et al., 2024).

Discretization enables both efficient SDF queries during Gaussian optimization and direct extraction of meshes using Marching Cubes after training converges.
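As a concrete illustration of the trilinear-grid variant, the sketch below queries a voxelized SDF by trilinear interpolation; the grid resolution, the sphere test field, and all function names are illustrative choices, not taken from any of the cited papers:

```python
import numpy as np

def query_sdf_trilinear(grid, p):
    """Trilinearly interpolate a cubic voxel SDF grid at a continuous point p.

    grid: (N, N, N) array of signed distances stored at lattice centers
    p:    (3,) query point in grid index coordinates, 0 <= p_i <= N-1
    """
    p = np.asarray(p, dtype=float)
    i0 = np.clip(np.floor(p).astype(int), 0, grid.shape[0] - 2)
    t = p - i0                                  # fractional offsets in [0, 1]
    # Gather the 8 surrounding corner values and blend axis by axis.
    c = grid[i0[0]:i0[0] + 2, i0[1]:i0[1] + 2, i0[2]:i0[2] + 2]
    c = c[0] * (1 - t[0]) + c[1] * t[0]         # collapse x
    c = c[0] * (1 - t[1]) + c[1] * t[1]         # collapse y
    return c[0] * (1 - t[2]) + c[1] * t[2]      # collapse z

# Toy grid: SDF of a sphere of radius 10 centered at the grid middle.
N = 32
ax = np.arange(N)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
sdf = np.sqrt((X - 16.0) ** 2 + (Y - 16.0) ** 2 + (Z - 16.0) ** 2) - 10.0

print(query_sdf_trilinear(sdf, (16.0, 16.0, 26.0)))  # on the surface, s ≈ 0
```

The same interpolated field can be handed to Marching Cubes after training to extract a mesh, as the paragraph above describes.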

2. Bidirectional Coupling: Mutual Supervision and Losses

The tight integration between Gaussians and discretized SDFs relies on supervision flows in both directions:

  • SDF → GS:

    • Depth and Normal Consistency: Rendered depth and normals from the SDF and from the Gaussian splats are forced to match along camera rays. The loss typically includes symmetric depth and normal agreement terms:

    L_\text{depth} = \mathbb{E}_r\left[ \left| D_\text{sdf}(r) - \text{stopgrad}(D_\text{gs}(r)) \right| + \left| D_\text{gs}(r) - \text{stopgrad}(D_\text{sdf}(r)) \right| \right]

    L_\text{normal} = \mathbb{E}_r\left[ \left(1 - \langle n_\text{sdf}, \text{stopgrad}(n_\text{gs}) \rangle\right) + \left(1 - \langle n_\text{gs}, \text{stopgrad}(n_\text{sdf}) \rangle\right) \right]

    (Zhu et al., 2024; Lyu et al., 2024)

    • Zero-Level Set Attraction: Each Gaussian center \mu_i is penalized for deviating from the SDF zero-level set, e.g. via a pull term of the form \sum_i |s(\mu_i)|.
    • Covariance-Normal Alignment: Each Gaussian's smallest-variance direction n_g (its shortest principal axis) is aligned with the SDF surface normal \nabla s / \|\nabla s\|, e.g. via:

    L_\text{align} = \sum_i \left( 1 - \left| \left\langle n_{g,i}, \frac{\nabla s(\mu_i)}{\|\nabla s(\mu_i)\|} \right\rangle \right| \right)

    (Zhu et al., 2024).

  • GS → SDF:

    • Normal Guidance: The smoother per-pixel normals n_\text{gs} rendered from the splatted Gaussians are used to regularize the SDF, e.g. via:

    L_\text{guide} = \mathbb{E}_r\left[ \left\| n_\text{sdf}(x_r) - \text{stopgrad}(n_\text{gs}(r)) \right\| \right]

    where x_r is the SDF hit point for ray r (Zhu et al., 2024).

    • Joint Photometric Losses: Both representations contribute to RGB rendering or relighting losses with compositional objectives.
    • Ray Sampling Localization: Gaussian depth is used to restrict SDF sampling to near-surface intervals, focusing computational resources and supervision (Yu et al., 2024; Tourani et al., 15 Oct 2025).

This bidirectional supervision ensures that explicit and implicit structures co-adapt, yielding high-fidelity geometry alongside photorealistic rendering.
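The symmetric stop-gradient losses above can be sketched numerically. This is a minimal numpy illustration: in plain numpy a stop-gradient is the identity, so the comments only note where gradients would flow under an autodiff framework, and all function names are hypothetical:

```python
import numpy as np

def symmetric_depth_loss(d_sdf, d_gs):
    """Symmetric L1 depth-consistency loss between SDF and Gaussian renders.

    Under autodiff, each |.| term would wrap its second argument in
    stop-gradient, so gradients flow only into the first argument; in
    numpy both terms simply share the same numeric value.
    """
    term_sdf = np.abs(d_sdf - d_gs)   # gradients would reach the SDF branch
    term_gs = np.abs(d_gs - d_sdf)    # gradients would reach the GS branch
    return np.mean(term_sdf + term_gs)

def symmetric_normal_loss(n_sdf, n_gs):
    """Symmetric cosine loss on per-ray unit normals, both of shape (R, 3)."""
    cos = np.sum(n_sdf * n_gs, axis=-1)
    # Both stop-gradient terms evaluate to the same number (1 - cos).
    return np.mean((1.0 - cos) + (1.0 - cos))

d_a = np.array([1.0, 2.0, 3.0])
d_b = np.array([1.5, 2.0, 2.0])
print(symmetric_depth_loss(d_a, d_b))  # 2 * mean(|d_a - d_b|) = 1.0
```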

3. SDF-Guided Gaussian Optimization, Densification, and Pruning

Discretized SDF fields are used to regularize and dynamically adapt the Gaussian distribution in several ways:

  • Outlier/Floater Pruning: Gaussians distant from the SDF zero-level set (e.g., |s(\mu_i)| > \tau, where \tau is an adaptive threshold) are culled to avoid floating, non-surface primitives (Zhu et al., 2024; Zhu et al., 21 Jul 2025; Yu et al., 2024).
  • Densification: Grid cells with |s| < \epsilon (i.e., near the surface) that are under-represented by Gaussians trigger densification, by cloning or splitting existing splats or seeding new Gaussians (Xiang et al., 2024; Tourani et al., 15 Oct 2025). SDF-prioritized densification ensures geometric coverage, particularly in textureless or under-sampled regions.
  • Opacity-from-SDF Mapping: Bell-shaped or logistic SDF-to-opacity transforms (e.g., \alpha(s) \propto \exp(-s^2 / 2\beta^2)) link a Gaussian’s opacity to its SDF sample, enforcing layer localization directly in splatting (Zhu et al., 21 Jul 2025; Lyu et al., 2024; Li et al., 2024).
  • Covariance Regularization: Additional terms encourage the Gaussians’ shapes to flatten along the principal surface direction, spreading them as thin sheets for surface adherence (Li et al., 2024; Zhang et al., 2024).

Combined, these mechanisms concentrate Gaussian splats precisely along the learned SDF surface, improving geometric faithfulness and rendering efficiency.
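Two of these mechanisms can be sketched in a few lines, assuming a Gaussian-bell SDF-to-opacity transform (one common choice among those the papers describe); the threshold tau, bandwidth beta, and function names are illustrative:

```python
import numpy as np

def prune_mask(sdf_at_centers, tau):
    """Keep only Gaussians whose centers lie within |s| <= tau of the
    zero-level set; the rest are treated as floaters and culled."""
    return np.abs(sdf_at_centers) <= tau

def opacity_from_sdf(s, beta=0.05):
    """Bell-shaped SDF-to-opacity transform: peaks at 1 exactly on the
    surface (s = 0) and decays smoothly away from it."""
    return np.exp(-(s ** 2) / (2.0 * beta ** 2))

s = np.array([0.0, 0.02, 0.5, -1.0])   # SDF samples at four Gaussian centers
print(prune_mask(s, tau=0.1))          # first two survive, floaters culled
print(opacity_from_sdf(s))             # on-surface Gaussian gets opacity 1.0
```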

4. Projection-Based and Pulling Constraints for Surface Consistency

To circumvent the challenges of gradient-based Eikonal penalties or SDF norm enforcement in discretized settings, several works introduce projection-based constraints:

  • Projected Center Losses: Each Gaussian center p is projected to the zero-level set via p' = p - s(p)\,n, with n a surface normal, and differences in rendered depth or geometry are penalized to enforce surface adherence (Zhu et al., 21 Jul 2025; Zhang et al., 2024).
  • Pulling Gaussians to SDF Surface: At every optimization step, Gaussians are “pulled” along the SDF gradient to reside on the zero-level set s = 0. The operation

p' = p - s(p)\,\frac{\nabla s(p)}{\|\nabla s(p)\|}

ensures alignment between the explicit and implicit geometry during rendering (Zhang et al., 2024).

  • Eikonal-Like Constraints via Projections: Instead of enforcing the Eikonal residual \|\nabla s\| = 1 over the volume, the projection-based loss ensures first-order consistency between the splatted surface and the implicit SDF interface (Zhu et al., 21 Jul 2025).

Such strategies enable discretized SDFs to robustly constrain piecewise-sampled geometry representations without requiring full continuous SDF access throughout the volume.
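The pulling operation can be sketched with a finite-difference SDF gradient; the unit-sphere SDF and helper names below are illustrative stand-ins, not from the cited papers:

```python
import numpy as np

def numeric_grad(sdf, p, h=1e-5):
    """Central-difference gradient of a callable SDF at point p (3,)."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        g[i] = (sdf(p + e) - sdf(p - e)) / (2.0 * h)
    return g

def pull_to_surface(sdf, p):
    """Project p onto the zero-level set: p' = p - s(p) * grad s / ||grad s||."""
    g = numeric_grad(sdf, p)
    return p - sdf(p) * g / np.linalg.norm(g)

sphere = lambda p: np.linalg.norm(p) - 1.0     # SDF of the unit sphere
p = np.array([2.0, 0.0, 0.0])
p_surf = pull_to_surface(sphere, p)
print(p_surf, sphere(p_surf))                  # lands on the sphere, s ≈ 0
```

For a true distance field one pull lands exactly on the surface; for a learned, only approximately metric SDF the same step is applied iteratively during optimization.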

5. Training Pipelines and Computational Characteristics

Hybrid GS+SDF models are typically trained in several phases, only the last of which is jointly optimized:

  • Phase 1: Gaussian-only photometric training, possibly with smoothness or normal regularizations, warms up splats for gross appearance and geometry (Zhu et al., 2024).
  • Phase 2: SDF-only stage, often with frozen Gaussians, trains the grid or hash-based SDF with both supervised and mutual geometry losses.
  • Phase 3: Joint fine-tuning, with bidirectional depth, normal, and photometric agreements; SDF-guided pruning and densification occur alongside surface-projected constraints.
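The three phases above can be sketched as a loss schedule; the iteration boundaries and loss names here are purely illustrative, not taken from any specific pipeline:

```python
def active_losses(step, warmup_end=5000, sdf_end=10000):
    """Return the set of loss terms active at a given training step,
    following the three-phase structure described above."""
    if step < warmup_end:                      # Phase 1: Gaussian-only warm-up
        return {"photometric_gs", "normal_smoothness"}
    if step < sdf_end:                         # Phase 2: SDF stage, Gaussians frozen
        return {"sdf_depth", "sdf_normal", "mutual_geometry"}
    return {                                   # Phase 3: joint fine-tuning
        "photometric_gs", "sdf_depth", "sdf_normal",
        "mutual_geometry", "projection", "densify_prune",
    }

print(active_losses(100))
```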

These pipelines are computationally efficient. For example, GS-ROR² completes training in ≈1.5 h on an RTX 4090 (memory footprint: 100k Gaussians, a 400^3 SDF grid, and a small MLP; <4 GB), with rendering at 200+ FPS (Zhu et al., 2024). Introducing a discretized SDF typically adds only a minor memory overhead relative to pure 3DGS: e.g., GS-ROR² requires ≈22 GB during training, whereas the discretized SDF baking in (Zhu et al., 21 Jul 2025) matches 3DGS’s 4 GB with no extra networks.

6. Applications, Limitations, and Empirical Performance

These hybrid models have been validated across multiple domains, spanning relightable asset generation, mesh extraction, sparse-view and indoor reconstruction, and dynamic urban scenes (see the comparative table in Section 7).

Empirical results demonstrate improved PSNR, SSIM, LPIPS, and Chamfer metrics against baselines. For example, (Zhu et al., 21 Jul 2025) reports PSNR 24.52 vs. 23.39 and CD 0.0107 vs. 0.0140 compared to leading hybrid methods on reflective Blender models, with half the memory footprint.

7. Comparative Table of Representative Approaches

| Method | SDF Discretization | GS-SDF Coupling Mechanism | Key Applications |
|---|---|---|---|
| GS-ROR² (Zhu et al., 2024) | Grid + MLP (TensorSDF or trilinear) | Bidirectional depth/normal, zero-set pull, pruning | Reflective object relighting, mesh extraction |
| SurfaceSplat (Gao et al., 21 Jul 2025) | Voxel grid + MLP (coarse/fine) | Cyclic bootstrapping, image-based SDF refinement | Surface reconstruction from sparse views |
| 3DGSR (Lyu et al., 2024) | Hash grid + MLP | SDF-to-opacity, mutual rendering, joint losses | High-fidelity mesh extraction, view synthesis |
| GSDF (Yu et al., 2024) | Multi-resolution hash grid + MLP | Depth localization, SDF-guided densification/pruning, mutual geometry supervision | Fast rendering and mesh extraction |
| GS-Octree (Li et al., 2024) | Octree vertex SDF | SDF-to-alpha, opacity/scale regularization | Lighting-robust geometry under specular highlights |
| SplatSDF (Li et al., 2024) | Hash grid + SDF at anchor points | Anchor-based embedding fusion at surface hits | Accelerated SDF-NeRF training, high-fidelity geometry |
| GaussianRoom (Xiang et al., 2024) | Grid SDF + MLP (periodically flushed) | SDF-guided densification, pruning, monocular priors | Indoor scene reconstruction |
| UGSDF (Tourani et al., 15 Oct 2025) | Multi-resolution grid + MLP + hypernet | SDF-guided adaptation, 2D-prior fusion | Dynamic urban scene rendering |

These methods establish that Gaussian splatting with discretized SDF not only advances geometry quality and computational scalability relative to prior pure implicit or pure explicit schemes but also enables relightable asset production and robust scene understanding across complex domains.
