Relightable 3D Gaussian Splatting

Updated 22 January 2026
  • Relightable 3D Gaussian Splatting is a rendering technique that uses a sparse cloud of learnable anisotropic Gaussian ellipsoids to encode geometry, material properties, and radiance transfer.
  • It fuses explicit geometry fitting, deferred shading, and advanced lighting models—using spherical harmonics, spherical Gaussians, or neural networks—to simulate complex light interactions including shadows and interreflections.
  • This representation supports applications ranging from high-fidelity head avatars to scalable scene reconstructions across indoor and outdoor environments, enabling interactive, photorealistic relighting.

A relightable 3D Gaussian Splatting representation is a view- and lighting-adaptive rendering model that encodes both geometry and physically based appearance with anisotropic Gaussian primitives, enabling photorealistic, real-time relighting and novel-view synthesis for complex scenes and assets. Unlike classical mesh or volumetric models, this approach employs a sparse cloud of learnable Gaussian ellipsoids, each parameterized for geometry, material, and radiance transfer. By combining explicit geometry fitting, precomputed radiance transfer, deferred shading, and, where necessary, bidirectional spherical harmonics or neural reflectance models, these methods unify the efficiency of 3D Gaussian splatting with accurate lighting simulation and material decomposition. Recent advances enable not only high-fidelity head avatars that resolve hair and facial microgeometry and eye glints, but also scalable scene-level representations that support shadows, interreflection, global illumination, and material editing across indoor and outdoor domains.

1. Mathematical Foundations and Primitive Parameterization

A relightable 3D Gaussian primitive is defined by:

  • Center: $\mu_k \in \mathbb{R}^3$
  • Covariance: $\Sigma_k = R_k\,\mathrm{diag}(s_k)^2\,R_k^\top$, with $R_k \in SO(3)$ a rotation and $s_k \in \mathbb{R}^3$ per-axis scales
  • Opacity: $o_k \in [0,1]$
  • Material / Appearance: including base albedo $\rho_k \in \mathbb{R}^3$, roughness $r_k$, metalness $m_k$, radiance transfer coefficients, and (optionally) view/light-dependent SH, SG, or neural-MLP features

For a point $x \in \mathbb{R}^3$, the unnormalized Gaussian density is

$$g_k(x) = w_k \exp\!\left(-\tfrac{1}{2}(x - \mu_k)^\top \Sigma_k^{-1} (x - \mu_k)\right)$$

Projection to the image plane uses the camera pose and the local affine Jacobian of the projection: the projected Gaussian center is $\mu_k'$, and the projected covariance is $\Sigma_k' = J V \Sigma_k V^\top J^\top$, where $V$ is the view transform and $J$ the projection Jacobian. The per-pixel color $C_p$ is composited front-to-back:

$$C_p = \sum_{k=1}^{N} c_k\,\alpha_k \prod_{j<k} (1 - \alpha_j)$$

where $\alpha_k$ is the screen-space opacity determined by the projected area and the learned $o_k$.
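
The density and compositing equations can be sketched directly in NumPy. This is a minimal illustration rather than an optimized rasterizer, assuming splats are already depth-sorted and their screen-space opacities $\alpha_k$ have been evaluated:

```python
import numpy as np

def gaussian_density(x, mu, cov, w=1.0):
    """Unnormalized anisotropic density g_k(x) = w_k exp(-1/2 (x-mu)^T Sigma^-1 (x-mu))."""
    d = x - mu
    return w * np.exp(-0.5 * d @ np.linalg.solve(cov, d))

def composite_front_to_back(colors, alphas):
    """C_p = sum_k c_k * a_k * prod_{j<k} (1 - a_j), splats sorted front to back."""
    transmittance = 1.0
    pixel = np.zeros(3)
    for c, a in zip(colors, alphas):
        pixel += transmittance * a * np.asarray(c, dtype=float)
        transmittance *= 1.0 - a  # light remaining after passing this splat
    return pixel
```

Note that once transmittance approaches zero, later splats contribute nothing; practical rasterizers exploit this for early termination.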

Material attributes per Gaussian vary with the method but commonly include explicit physically based rendering (PBR) parameters (albedo, roughness, metallicity) and radiance transfer coefficients (SH for diffuse/global effects, SG or angular Gaussians for higher-frequency/lobe effects) (Saito et al., 2023, Choi et al., 2024, Guo et al., 2024).

2. Appearance and Relighting Models

To support accurate relighting, 3DGS representations decompose outgoing radiance into multiple components per Gaussian:

  • Diffuse Component: Captures view-independent energy, often parameterized using spherical harmonics (SH), representing both the illumination and the intrinsic per-Gaussian transfer function (occlusion, multi-bounce, subsurface scattering).

$$c_k^{\text{diffuse}} = \rho_k \circ \sum_{i=1}^{(n+1)^2} L_i \circ d_k^i$$

where $L_i$ are SH lighting coefficients, $d_k^i$ are per-Gaussian transfer coefficients, and $\circ$ denotes elementwise (channel-wise) multiplication (Saito et al., 2023, Guo et al., 2024).
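
Read concretely, the diffuse term is a channel-wise contraction over SH coefficients. A minimal NumPy sketch, where the array shapes and names are illustrative assumptions:

```python
import numpy as np

def diffuse_color(albedo, light_sh, transfer_sh):
    """c^diffuse = rho ∘ sum_i L_i ∘ d^i, with '∘' elementwise over RGB.

    albedo:      (3,)          base color rho_k
    light_sh:    (n_coeffs, 3) per-channel SH lighting coefficients L_i
    transfer_sh: (n_coeffs, 3) per-Gaussian transfer coefficients d_k^i
    """
    return albedo * np.sum(light_sh * transfer_sh, axis=0)
```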

  • Specular Component: Modeled either by spherical Gaussian (SG) lobes aligned with the reflection axis, with learned width/visibility, or, in more expressive settings, as a mixture of anisotropic angular Gaussians or by bidirectional SH for full view- and light-dependence (Saito et al., 2023, Bi et al., 2024). For SG:

$$c_k^{\text{specular}}(\omega_o) = v_k(\omega_o) \int_{S^2} L(\omega_i)\, G_s(\omega_i; q_k, \sigma_k)\, d\omega_i$$

where $q_k$ is the reflection axis, $v_k$ is a visibility mask, and $G_s(\cdot)$ is the normalized SG kernel.
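
The spherical integral can be approximated numerically. The sketch below assumes an exponential SG lobe $G_s(\omega; q, \lambda) = \exp(\lambda(\omega \cdot q - 1))$ (a sharpness $\lambda$ standing in for the width $\sigma_k$) and quasi-uniform Fibonacci-sphere quadrature; `env_fn` is a hypothetical environment-radiance callable, not part of any specific method:

```python
import numpy as np

def sg_kernel(dirs, axis, sharpness):
    """Spherical Gaussian lobe G_s(w; q, lambda) = exp(lambda * (w·q - 1))."""
    return np.exp(sharpness * (dirs @ axis - 1.0))

def fibonacci_sphere(n):
    """Quasi-uniform sample directions on S^2 for quadrature."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=-1)

def specular_color(env_fn, axis, sharpness, visibility=1.0, n_samples=4096):
    """c^spec ≈ v * (4π/N) * sum_i L(w_i) G_s(w_i; q, λ) over quadrature dirs."""
    dirs = fibonacci_sphere(n_samples)
    L = np.array([env_fn(w) for w in dirs])        # (N, 3) incoming radiance
    G = sg_kernel(dirs, axis, sharpness)[:, None]  # (N, 1) lobe weights
    return visibility * (4.0 * np.pi / n_samples) * np.sum(L * G, axis=0)
```

In practice, methods favor closed-form SG products or prefiltered environment maps over brute-force quadrature; the sketch only makes the integral concrete.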

  • Radiance Transfer Functions: For low-frequency environments or diffuse-dominant scenes, precomputed radiance transfer (PRT) expands the transfer kernel in SH basis:

$$L_o(x) = \sum_{j=1}^{N} \ell_j\, t_j(x)$$

or, for glossy surfaces, an SH matrix that enables efficient dot-product evaluation at runtime (Guo et al., 2024).
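
At runtime, diffuse PRT evaluation is just a dot product between lighting and transfer coefficients, and the glossy case replaces the transfer vector with a matrix. A minimal sketch with illustrative shapes:

```python
import numpy as np

def prt_radiance(light_coeffs, transfer_coeffs):
    """Diffuse PRT: L_o(x) = <l, t(x)>, a dot product in the SH basis."""
    return np.dot(light_coeffs, transfer_coeffs)

def prt_radiance_glossy(light_coeffs, transfer_matrix):
    """Glossy PRT: an SH transfer matrix maps incident SH lighting to
    outgoing SH radiance, evaluated as a matrix-vector product."""
    return transfer_matrix @ light_coeffs
```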

  • Learned Neural Corrections: For domains with intricate subsurface, non-Lambertian, or volumetric effects, small neural networks (e.g., MLPs) are employed to regress the diffuse or global-illumination residuals as functions of light, view, and local Gaussian parameters (Kaleta et al., 2024, Bi et al., 2024).

3. Deferred Shading and Rendering Pipelines

Relighting within the 3DGS paradigm can be performed through either alpha compositing (classical splatting) or deferred shading pipelines:

  • Standard Splatting: Gaussians are rasterized in screen (or light) space by computing their projected 2D ellipses, followed by ordered front-to-back blending of their appearance, weighted by per-Gaussian opacity and (optionally) precomputed shadow or interreflection terms (Saito et al., 2023, Liao et al., 14 Sep 2025).
  • Deferred Rendering: Separates rasterization into a G-buffer stage, where all per-pixel geometric and material information is composited, and a screen-space shading pass that evaluates the BRDF under the current illumination (Choi et al., 2024, Wu et al., 2 Apr 2025, Zhu et al., 21 Jul 2025). This eliminates hidden-surface artifacts where Gaussians beneath the visible surface impact the result.
  • Ray-Tracing Integration: Some frameworks (e.g., RaySplats) merge Gaussians and triangle meshes in a BVH, allowing full ray-tracing for shadows, interreflection, and refraction, with differentiable index buffers supporting efficient backward passes (Byrski et al., 31 Jan 2025).
  • Triple Splatting: GS³ performs three passes—camera-space for shading, light-space for shadow accumulation, and a global-illumination pass—followed by compositing. Shadows and GI are refined via small neural networks conditioned on local Gaussian features (Bi et al., 2024).
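
A deferred pipeline of this kind can be sketched per pixel: a first pass composites splat attributes into a G-buffer, and a second pass shades the composited surface under the current light. The Lambertian BRDF and the attribute names here are illustrative stand-ins, not any specific paper's formulation:

```python
import numpy as np

def composite_gbuffer(splat_attrs, splat_alphas):
    """Pass 1: front-to-back compositing of per-splat attributes into one
    G-buffer pixel. Each entry has 'albedo' (3,) and 'normal' (3,)."""
    transmittance, albedo, normal = 1.0, np.zeros(3), np.zeros(3)
    for attrs, a in zip(splat_attrs, splat_alphas):
        albedo += transmittance * a * attrs["albedo"]
        normal += transmittance * a * attrs["normal"]
        transmittance *= 1.0 - a
    normal = normal / (np.linalg.norm(normal) + 1e-8)  # renormalize blended normal
    return albedo, normal

def shade_pixel(albedo, normal, light_dir, light_rgb):
    """Pass 2: screen-space shading on the composited G-buffer (Lambertian here)."""
    return albedo * light_rgb * max(normal @ light_dir, 0.0)
```

Because shading happens once per pixel on the composited surface, occluded Gaussians cannot leak their material parameters into the shaded result, which is the motivation for deferred variants.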

4. Learning Strategies and Regularization

Effective relightable 3DGS requires robust geometry, material-light decomposition, and stability. Approaches include:

  • Geometry refinement: Using monocular normal priors, mesh extraction, or SDF-based projection constraints, methods regularize Gaussian placement to avoid blobby or misaligned surfaces; COREA, for example, employs dual-branch bidirectional supervision that aligns the SDF and the 3DGS splat-cloud geometry (Zhu et al., 21 Jul 2025, Lee et al., 8 Dec 2025).
  • Material decomposition: Where possible, SH/PRT coefficients and per-Gaussian BRDF parameters are learned with multi-stage schedules, often starting with diffuse-only or flat-lit appearance and progressing to full view/light parameterization (Xu et al., 6 Jan 2026, Saito et al., 2023).
  • Opacity/scale regularization: Controls over opaque region coverage and per-Gaussian size keep the representation compact and conformant to surface shape (Choi et al., 2024).
  • Losses: Image-space L₁, SSIM, and photometric metrics dominate, but normal/curvature, opacity-masked, SDF-consistency, and radiance/SH smoothness losses provide crucial stabilization and disentanglement (Choi et al., 2024, Zhu et al., 21 Jul 2025, Lee et al., 8 Dec 2025).
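
As a minimal illustration of how such terms combine (with the SSIM and SDF-consistency terms omitted for brevity), a hypothetical L1 + normal-prior objective might look like:

```python
import numpy as np

def l1_loss(pred, gt):
    """Image-space L1 photometric loss."""
    return np.mean(np.abs(pred - gt))

def normal_prior_loss(pred_normals, prior_normals):
    """Penalize deviation from (e.g., monocular) normal priors via 1 - cosine
    similarity; both inputs are (..., 3) unit-normal arrays."""
    cos = np.sum(pred_normals * prior_normals, axis=-1)
    return np.mean(1.0 - cos)

def total_loss(pred, gt, pred_normals, prior_normals, lam_normal=0.05):
    """Weighted sum of photometric and geometric regularization terms.
    The weight lam_normal is an illustrative value, not from any paper."""
    return l1_loss(pred, gt) + lam_normal * normal_prior_loss(pred_normals, prior_normals)
```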

5. Domain-Specific Adaptations and Extensions

Relightable 3DGS frameworks have been generalized for:

  • Head avatars and avatars with explicit dynamic expressivity: e.g., Relightable Gaussian Codec Avatars and RelightAnyone use expressive geometry+SH+SG parameterizations, CVAEs for animation, and explicit analytic eye models for high-fidelity glints and gaze control (Saito et al., 2023, Xu et al., 6 Jan 2026).
  • General assets and outdoor/indoor scenes: Adaptations to support sun/sky/indirect-light decomposition (e.g., ROSGS, GaRe), hybrid mesh+GS representations for precise occlusion, and large-scale asset pipelines with PBR material prediction (e.g., MGM, GRGS) (Liao et al., 14 Sep 2025, Sun et al., 27 May 2025, Ye et al., 26 Sep 2025).
  • Medical imaging: PR-ENDO customizes the appearance model to endoscopy, employing a diffuse MLP and camera-light coupled physically-based rendering (Kaleta et al., 2024).
  • Volumetric/anisotropic/unstructured materials: BiGS and GS³ drop or generalize normals and incorporate bidirectional SH or angular-Gaussian scattering for fluffy/fur/translucent objects (Liu et al., 2024, Bi et al., 2024).

6. Quantitative Performance, Applications, and Limitations

Most 3DGS relightable frameworks match or surpass previous real-time relighting and inverse-rendering methods in PSNR/SSIM/LPIPS, with real-time rendering rates (30–100+ fps) and training times ranging from less than an hour for scene reconstructions to a few hours for high-fidelity avatars (Saito et al., 2023, Bi et al., 2024, Guo et al., 2024). Explicit support for dynamic lighting, shadows, and view-dependent effects enables real-time VR avatars, scene relighting/editing, object transfer across scenes (TranSplat), and scene editing pipelines (Yu et al., 28 Mar 2025, Saito et al., 2023, Bi et al., 2024).

Limitations include the low-frequency truncation inherent in SH-only methods (blurry shadows, loss of sharp specular features), difficulty with thin, transparent, or refractive materials, out-of-distribution deterioration for hair, accessories, or unusual geometry, and, for some pipelines, the need for specialized capture (e.g., one-light-at-a-time, OLAT) or per-scene optimization (Saito et al., 2023, Ye et al., 26 Sep 2025, Bi et al., 2024).

7. Comparative Table of Major Relightable 3DGS Methods

| Method | Geometry | Appearance Model | Lighting/BRDF | Core Relighting Pipeline |
|---|---|---|---|---|
| Relightable Gaussian Codec Avatars (Saito et al., 2023) | 3D Gaussians (deformable) | SH diffuse, SG specular, learned transfer | SH, SG, latent CVAE, explicit eye model | Front-to-back compositing with radiance transfer |
| Phys3DGS (Choi et al., 2024) | Hybrid mesh+GS | PBR BRDF + indirect/vis SH | Cook-Torrance/Disney BRDF | Deferred rendering: G-buffer + screen shading |
| GS³ (Bi et al., 2024) | 3D Gaussians | Lambertian + mixture of angular Gaussians | Custom reflectance, MLP refinement | Triple splat: shading, shadow, GI |
| PRTGS (Guo et al., 2024) | 3D Gaussians | Precomputed SH transfer | PRT kernel (SH), diffuse+glossy | SH tensor contraction per splat |
| MGM (Ye et al., 26 Sep 2025) | 2DGS | PBR (a,ρ,m), no baked light | Cook-Torrance | Splat attributes + full BRDF integral |
| BiGS (Liu et al., 2024) | 3D Gaussians | Bidirectional SH, no normals | Full bidir SH scattering | SH composition, fully volumetric |
| PR-ENDO (Kaleta et al., 2024) | 3D Gaussians | Diffuse MLP, Cook-Torrance | Endoscopy PBR, hybrid | Alpha splat + per-splat MLP |
| TranSplat (Yu et al., 28 Mar 2025) | 2DGS/3DGS | Spherical harmonics | No explicit materials | SH coefficient remapping per splat |
| RelightAnyone (Xu et al., 6 Jan 2026) | 3DGS (head) | SH transfer diffuse, SG specular | Two-stage, latent UNets | Cross-subject mapping, alpha compositing |

All methods combine explicit Gaussian-based geometry with learnable physically based appearance and radiance transfer, harnessing efficient splatting or deferred shading for real-time and interactive relighting. The technical design—choice of SH/SG model, deferred/ray-traced rendering, explicit SDF or hybrid mesh incorporation, and neural extensions—adapts to the required fidelity, performance, and domain context.
