Deep Gaussian Shadow Maps (DGSM)

Updated 6 January 2026
  • DGSM is a volumetric shadow mapping method that computes view-dependent shadows via analytic closed-form integration using anisotropic Gaussians.
  • It employs Gaussian density proxies from neural 3D representations like NeRF and 3DGS to enable efficient, mesh-free, and real-time shadow and relighting computations.
  • DGSM uses precomputed atlas-based radial shells and octahedral mapping for fast lookup, achieving high performance in dynamic multi-light and animated scenes.

Deep Gaussian Shadow Maps (DGSM) are a class of shadowing and relighting techniques designed for volumetric neural 3D representations, particularly Gaussian Splatting (3DGS) and NeRF-derived density proxies. DGSM enables the casting and receiving of consistent, view-dependent shadows and dynamic relighting directly within continuous Gaussian domains. Unlike classical shadow mapping, DGSM leverages analytic closed-form integration through anisotropic Gaussians and high-performance atlas structures, providing scalable, mesh-free volumetric shadow computation suitable for animated avatars and objects in neural or captured scenes. The method supports real-time rendering and robust shadow effects for dynamic interactions without explicit geometry meshing or voxelization, yielding environment-consistent shading and light transfer in a fully differentiable pipeline (Bolanos et al., 2024, Mir et al., 4 Jan 2026).

1. Mathematical Foundations and Analytic Occlusion Integrals

DGSM builds on the observation that volumetric absorption along a ray passing through a sum of 3D Gaussian density proxies admits closed-form evaluation. For a set of anisotropic Gaussians with centers $\mu_i$, covariances $\Sigma_i$, and absorption coefficients $\beta_i$, the absorption field is

$$\sigma(x) = \sum_{i} \beta_i \exp\!\left[-\tfrac{1}{2}(x-\mu_i)^T \Sigma_i^{-1} (x-\mu_i)\right].$$

The optical depth $\tau$ along a ray $r(t) = o_\ell + t\,d$ is

$$\tau(d, t) = \int_0^t \sigma(o_\ell + s\,d)\, ds = \sum_{i} \beta_i \int_0^t \exp\!\left[-\tfrac{1}{2}(a_i s^2 + 2 b_i s + c_i)\right] ds,$$

with $a_i = d^T \Sigma_i^{-1} d$, $b_i = d^T \Sigma_i^{-1} (o_\ell - \mu_i)$, and $c_i = (o_\ell - \mu_i)^T \Sigma_i^{-1} (o_\ell - \mu_i)$. Each integral is expressible in terms of the error function $\operatorname{erf}(\cdot)$. The transmittance (shadow factor) is then

$$T(d, t) = \exp[-\tau(d, t)].$$

This analytic formulation avoids sampling-based approximations, permitting exact, differentiable computation of shadows for any query point and light source.
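
The closed form follows from completing the square in the exponent of each term. The snippet below is a minimal NumPy sketch of this evaluation; the function names, array shapes, and the use of scipy.special.erf are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.special import erf

def optical_depth(o, d, t, mu, Sigma_inv, beta):
    """Closed-form optical depth along the ray o + s*d, s in [0, t],
    through a mixture of anisotropic Gaussians (illustrative sketch).

    o: (3,) ray origin (light position)   d: (3,) unit direction   t: scalar ray length
    mu: (N, 3) Gaussian centers           Sigma_inv: (N, 3, 3) inverse covariances
    beta: (N,) absorption coefficients
    """
    diff = o - mu                                         # (N, 3)
    a = np.einsum('j,njk,k->n', d, Sigma_inv, d)          # a_i = d^T Sigma_i^{-1} d
    b = np.einsum('j,njk,nk->n', d, Sigma_inv, diff)      # b_i = d^T Sigma_i^{-1} (o - mu_i)
    c = np.einsum('nj,njk,nk->n', diff, Sigma_inv, diff)  # c_i = (o - mu_i)^T Sigma_i^{-1} (o - mu_i)

    # Complete the square: -(a s^2 + 2 b s + c)/2 = -(a/2)(s + b/a)^2 + (b^2/a - c)/2
    s = np.sqrt(2.0 * a)
    prefac = np.exp(0.5 * (b**2 / a - c)) * np.sqrt(np.pi / (2.0 * a))
    integral = prefac * (erf((a * t + b) / s) - erf(b / s))
    return np.sum(beta * integral)

def transmittance(o, d, t, mu, Sigma_inv, beta):
    """Shadow factor T = exp(-tau) toward the light."""
    return np.exp(-optical_depth(o, d, t, mu, Sigma_inv, beta))
```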

2. Gaussian Density Proxy Fitting and Scene Representation

DGSM leverages a Gaussian mixture proxy fitted to either a learned NeRF density or a direct 3DGS decomposition. For articulated avatars, $J \times K$ Gaussians are attached to a skeleton with $J$ joints and $K$ Gaussians per joint (Bolanos et al., 2024). Each Gaussian component $i$ has parameters $(\rho_i, \mu_i, \Sigma_i)$, where $\Sigma_i$ is constructed from scale and rotation,

$$\Sigma_i = R\, \operatorname{diag}(\sigma_x^2, \sigma_y^2, \sigma_z^2)\, R^T,$$

and $\rho_i$ is the density amplitude. Parameters are optimized to minimize the mean squared error against the underlying volumetric density field,

$$L_\text{gauss} = \mathbb{E}_x \left\| G(x) - D(x) \right\|^2.$$

Opacities $\alpha_i$ are calibrated to absorption coefficients $\beta_i$ (e.g., $\beta_i = \kappa\, \tau_i^\star \sqrt{\operatorname{tr}(\Sigma_i^{-1})/3}\, /\, \sqrt{2\pi}$, with $\tau_i^\star = -\ln(1-\alpha_i)$ and $\kappa$ a global shadow strength).
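
As a concrete illustration, the following sketch assembles a covariance from per-Gaussian scale and rotation and applies the opacity-to-absorption calibration above; the function names are hypothetical and $\kappa$ is treated as a user-chosen constant.

```python
import numpy as np

def covariance_from_scale_rotation(scale, R):
    """Sigma = R diag(sx^2, sy^2, sz^2) R^T for one Gaussian.
    scale: (3,) standard deviations, R: (3, 3) rotation matrix."""
    return R @ np.diag(scale**2) @ R.T

def absorption_from_opacity(alpha, Sigma_inv, kappa=1.0):
    """Calibrate splat opacity alpha to an absorption coefficient beta
    (illustrative; kappa is a global shadow-strength constant)."""
    tau_star = -np.log(1.0 - alpha)                      # tau_i* = -ln(1 - alpha_i)
    return kappa * tau_star * np.sqrt(np.trace(Sigma_inv) / 3.0) / np.sqrt(2.0 * np.pi)
```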

3. Tabulated Shadow Maps: Radial Shell Discretization and Octahedral Atlases

For real-time applications, DGSM precomputes transmittance tables indexed by radial shells and spherical directions. For each light source, ray directions $d$ are discretized using an octahedral mapping $\psi(d) \rightarrow (u, v)$, populating a 3D texture $T_\ell[u, v, k]$ over $K$ radial shells and $H \times W$ angular directions (Mir et al., 4 Jan 2026). Culling strategies restrict computation to regions of interest (ROIs) defined by the receiver geometry and light visibility, with per-pixel occluder binning via covariance-projected ellipses in atlas space. This approach provides low-latency lookup: at runtime, the transmittance for a splat at $x_s$ with respect to light $o_\ell$ is

$$d_s = \frac{x_s - o_\ell}{\|x_s - o_\ell\|}, \qquad t_s = \|x_s - o_\ell\|, \qquad T_\ell = \operatorname{trilinearSample}\!\big(T_\ell[\cdot],\, \psi(d_s),\, t_s\big).$$

This design ensures shadow computation is efficient, memory-bound, and hardware-accelerated.
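
A minimal sketch of the atlas lookup is given below, using the standard octahedral direction encoding for $\psi$ and a nearest-neighbor read in place of the trilinear filtering described above; the atlas layout and helper names are assumptions for illustration.

```python
import numpy as np

def octahedral_uv(d):
    """Map a unit direction d to (u, v) in [0, 1]^2 via octahedral projection
    (standard octahedral mapping; a sketch of psi(d) -> (u, v))."""
    d = d / np.sum(np.abs(d))                  # project onto the octahedron |x|+|y|+|z| = 1
    x, y, z = d
    if z < 0.0:                                # fold the lower hemisphere over the upper one
        x, y = (1.0 - abs(y)) * np.sign(x), (1.0 - abs(x)) * np.sign(y)
    return 0.5 * (x + 1.0), 0.5 * (y + 1.0)    # remap [-1, 1]^2 to [0, 1]^2

def sample_shadow_atlas(atlas, shell_radii, o_light, x_s):
    """Look up transmittance for a splat at x_s from a precomputed atlas
    atlas[k, v_idx, u_idx] over K radial shells (nearest-neighbor sketch;
    the method described above uses trilinear filtering)."""
    K, H, W = atlas.shape
    delta = x_s - o_light
    t_s = np.linalg.norm(delta)
    u, v = octahedral_uv(delta / t_s)
    k = np.clip(np.searchsorted(shell_radii, t_s), 0, K - 1)   # radial shell index
    return atlas[k, int(v * (H - 1)), int(u * (W - 1))]
```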

4. Integration with Neural Rendering and Spherical Harmonic Relighting

DGSM is compatible with deferred neural rendering pipelines and 3DGS splatting (Bolanos et al., 2024, Mir et al., 4 Jan 2026). In deferred schemes, a two-stage approach is used: (1) sample primary rays through the NeRF or SDF field to recover depth, albedo, and normal; (2) compute analytic shadow transmittance along secondary rays to each light direction, then perform Lambertian or more general shading. For animated avatars and dynamic scenes, environment illumination is approximated using spherical harmonic (SH) probes fit to HDRI cubemaps:

$$\min_{A \in \mathbb{R}^{K \times 3}} \left\| W^{1/2} (B A - Y) \right\|_F^2 + \lambda \| A \|_F^2,$$

using solid-angle-corrected samples. Per-Gaussian radiance transfer is performed via cosine lobes with numerical integration for diffuse and glossy effects,

$$S(\omega, n) = \max(0,\, \omega \cdot n)^q,$$

with resultant relit color $c' = \max(0, \gamma\, c \odot s(n))$.
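
The SH probe fit above is a weighted ridge regression with a closed-form normal-equation solution; the sketch below illustrates it, with the function name and default $\lambda$ chosen for illustration.

```python
import numpy as np

def fit_sh_probe(B, Y, w, lam=1e-3):
    """Fit SH coefficients A (K x 3) to cubemap radiance samples Y (M x 3)
    by weighted ridge regression (a sketch of the objective above).

    B:  (M, K) SH basis evaluated at the sample directions
    Y:  (M, 3) HDRI radiance samples
    w:  (M,)   solid-angle weights per sample
    lam: Tikhonov regularization strength lambda
    """
    K = B.shape[1]
    BtW = B.T * w                                    # B^T W (W diagonal)
    A = np.linalg.solve(BtW @ B + lam * np.eye(K), BtW @ Y)
    return A                                         # (K, 3) SH coefficients per color channel
```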

5. Algorithmic Workflow and Implementation

Both DGSM variants adhere to an algorithmic scheme comprising precomputation and runtime passes:

Precomputation (for dynamic lights or avatars):

  • Estimate light positions via photometric cues and SH analysis
  • Identify receiver ROI for shadow map construction
  • For each light, discretize directions/radial bins, bin occluders per atlas tile, and evaluate closed-form integrals, populating $T_\ell[u, v, k]$

Runtime Evaluation:

  • Fit SH probe at avatar position from HDRI cubemap
  • For each Gaussian, perform diffuse/glossy radiance transfer using SH coefficients
  • For each Gaussian splat, look up transmittance from precomputed tables for each light
  • Attenuate color by cumulative transmittance, then perform 3DGS splatting or neural rendering

Pseudocode for both stages is supplied in (Mir et al., 4 Jan 2026) and (Bolanos et al., 2024).
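
The sketch below illustrates how the runtime pass above might be organized for a single frame, reusing the atlas-lookup sketch from Section 3; all names are hypothetical and the actual pipelines in the cited papers differ in detail.

```python
import numpy as np

def shade_splats(splats, lights, sh_A, sh_basis_fn):
    """Hypothetical per-frame shading pass; `sample_shadow_atlas` refers to the
    lookup sketch in Section 3, and all attribute names are illustrative."""
    for s in splats:
        # Diffuse radiance transfer: evaluate the fitted SH probe along the splat normal.
        radiance = np.maximum(0.0, sh_basis_fn(s.normal) @ sh_A)          # (3,) RGB

        # Accumulate transmittance toward every light from the precomputed atlases.
        T = 1.0
        for light in lights:
            T *= sample_shadow_atlas(light.atlas, light.shell_radii,
                                     light.position, s.position)

        # Attenuate the splat color; the result feeds the standard 3DGS rasterizer.
        s.shaded_color = T * s.albedo * radiance
```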

6. Computational Complexity and Practical Performance

DGSM’s closed-form analytic evaluation reduces the quadratic $O(N_\text{samp}^2)$ secondary-ray cost typical of NeRF-based shadowing to $O(K)$ per pixel, where $K$ is the number of fitted Gaussians (in practice, $K \approx 200$ versus $N_\text{samp} \approx 64$). GPU-accelerated atlas lookup and SH relighting yield full-frame performance (30 fps for 10–20K Gaussians): shadow map build time is approximately 0.13 s per frame (with culling), and runtime shadowing/relighting takes at most 10 ms per frame. Relative to sampling-based methods, DGSM incurs only 2% overhead (17.47 s vs. 17.13 s baseline) rather than 25% (21.4 s for NeRFSC) (Bolanos et al., 2024). For multi-light and HDRI scenes, DGSM remains scalable.

7. Empirical Results and Applications

DGSM elevates shadow and relighting realism for neural avatars in both synthetic and captured scenes. Quantitative metrics on novel-pose rendering show a PSNR improvement ($17.2\,\mathrm{dB} \to 19.3\,\mathrm{dB}$), with SSIM ($0.648 \to 0.666$) and LPIPS ($0.283 \to 0.291$) (Bolanos et al., 2024). Hard self-casting shadow subsets demonstrate robust artifact reduction. For animated avatars composited into ScanNet++, DL3DV, and SuperSplat environments, DGSM delivers coherent, soft shadows and dynamic relighting consistent across the scene and inserted objects (Mir et al., 4 Jan 2026). Ablation experiments confirm the contributions of diffuse shading, light optimization, and normal gradient retention.

A plausible implication is that DGSM’s analytic and atlas-driven paradigm supports future integration of physically-based light transport and more expressive material models without compromising real-time performance or differentiability.
