
Lab Color Space Gaussian Splatting

Updated 14 October 2025
  • Lab Color Space Gaussian Splatting is a method that integrates perceptually uniform Lab color models with 3D Gaussian primitives to enhance color fidelity and multi-view consistency.
  • It employs decomposable color models (M-Color) and unsupervised optimization to decouple illumination from intrinsic color, supporting robust low-light recovery.
  • Spatially adaptive and temporal deformation techniques further improve scene rendering in dynamic applications like robotics and medical imaging.

Lab Color Space Gaussian Splatting is a research direction and methodology in 3D scene representation and rendering that explores the fusion of perceptually uniform color models, especially the CIE Lab space, with advanced variants of 3D Gaussian Splatting. These techniques are designed to achieve high-fidelity, physically plausible, and perceptually uniform representations of color and illumination under challenging conditions such as low light or dynamic surgical scenes. Recent work (Wang et al., 24 Mar 2025; Ji et al., 26 Aug 2025) leverages decomposable color models, unsupervised optimization strategies, spatially adaptive primitives, and advanced deformation models to maximize color fidelity, multi-view consistency, and computational performance in robotics and medical imaging contexts.

1. Foundations: Lab Color Space and 3D Gaussian Splatting

The CIE Lab color space is a perceptually uniform color model that separates luminance ($L^*$, corresponding to lightness) from chromaticity ($a^*$ and $b^*$, encoding color opponency). This separation allows for independent manipulation of luminance and color channels, aligning with human visual perception and making Lab especially suited for color correction and enhancement tasks.
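
As a concrete illustration of this separation, the sketch below (using scikit-image, which is not part of the cited works; the scaling factor is an arbitrary assumption) brightens only the lightness channel of an image while leaving chromaticity untouched:

```python
# Minimal sketch: convert RGB to CIE Lab, scale only L*, convert back.
# The a*/b* channels are untouched, which is the separation described above.
import numpy as np
from skimage import color  # assumes scikit-image is installed

rgb = np.random.rand(64, 64, 3)                    # stand-in image, values in [0, 1]
lab = color.rgb2lab(rgb)                           # L* in [0, 100], a*/b* roughly in [-128, 127]

lab[..., 0] = np.clip(lab[..., 0] * 1.3, 0, 100)   # brighten lightness only (factor is illustrative)
rgb_enhanced = color.lab2rgb(lab)                  # chromaticity preserved up to gamut clipping
```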

3D Gaussian Splatting (3DGS) is a neural scene representation method that models a 3D scene as a collection of anisotropic Gaussian primitives, each associated with spatial, opacity, and color parameters. Rendering proceeds by projecting and compositing these Gaussians in image space, supporting real-time synthesis and view consistency.
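
For orientation, the following sketch shows the standard front-to-back compositing step for a single pixel, assuming the Gaussians have already been projected to 2D and sorted by depth; variable names are illustrative rather than drawn from a particular codebase:

```python
# Front-to-back alpha compositing of depth-sorted, projected Gaussians at one pixel.
import numpy as np

def composite_pixel(colors, opacities, gauss_weights):
    """colors: (N, 3); opacities: (N,); gauss_weights: (N,) value of each projected
    2D Gaussian evaluated at this pixel, all ordered front to back."""
    pixel = np.zeros(3)
    transmittance = 1.0
    for c, o, g in zip(colors, opacities, gauss_weights):
        alpha = o * g                       # effective per-Gaussian alpha at the pixel
        pixel += transmittance * alpha * c
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:            # early termination, as in typical rasterizers
            break
    return pixel
```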

Linking Lab color space principles with 3DGS typically involves:

  • Designing representations that decouple intrinsic object color from lighting effects, similar to the $L^*$ vs. $a^*, b^*$ dichotomy.
  • Applying non-linear luminance adjustments (e.g., gamma correction) that are conceptually and mathematically analogous to transformations in Lab.
  • Targeting enhancements in ways that respect perceptual uniformity, robustness to illumination variation, and geometric consistency.

2. Decomposable Color Representations: The M-Color Mechanism

A central innovation in this domain is the M-Color representation, introduced in the context of Low-Light Gaussian Splatting (LLGS) (Wang et al., 24 Mar 2025). Standard 3DGS approaches rely on Spherical Harmonics (SH) for encoding color per Gaussian, which fuses material and illumination information. Inspired by the Retinex theory, M-Color explicitly decomposes each Gaussian’s color into:

  • A view-independent component $m_i$ (material color or base reflectance).
  • A view-dependent component $\omega_i$ (illumination or lighting adjustment).

This yields a multiplicative composition,

$$c_i = r_i \circ \omega_i$$

where $c_i$ is the rendered color, $r_i$ the base color, and $\omega_i$ the illumination coefficient. These are computed as follows:

  • $f_i = F_\mathrm{feature}(p_i; \theta_{F_\mathrm{feature}})$,
  • $\omega_i = F_\mathrm{light}(f_i, v_i; \theta_{F_\mathrm{light}})$,
  • $m_i = F_\mathrm{color}(f_i; \theta_{F_\mathrm{color}})$,
  • enhancement: $\hat{c}_i = r_i \circ (F_\mathrm{enhance}(\omega_i))$.

By acting only on the illumination term, the method enables selective enhancement (e.g., for low-light recovery) without distorting intrinsic scene chromaticity, paralleling operations on $L^*$ in Lab space.
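
A hedged PyTorch sketch of this decomposition is given below; the layer sizes, activations, and the optional enhancement hook are illustrative assumptions rather than the authors' exact architecture:

```python
# Sketch of an M-Color style decomposition: per-Gaussian features feed a
# view-dependent "light" head and a view-independent "color" head, combined
# multiplicatively. Layer sizes and activations are illustrative assumptions.
import torch
import torch.nn as nn

class MColor(nn.Module):
    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        self.f_feature = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, feat_dim))
        self.f_light = nn.Sequential(nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 3), nn.Softplus())
        self.f_color = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, positions, view_dirs, enhance=None):
        f = self.f_feature(positions)                        # f_i = F_feature(p_i)
        omega = self.f_light(torch.cat([f, view_dirs], -1))  # view-dependent illumination ω_i
        m = self.f_color(f)                                   # view-independent base color m_i
        if enhance is not None:                               # enhancement acts on illumination only
            omega = enhance(omega)
        return m * omega                                      # element-wise c_i = r_i ∘ ω_i
```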

3. Unsupervised Optimization and Multi-View Consistency

A persistent challenge for color enhancement in multi-view 3D settings is enforcing consistency without paired reflectance/illumination ground truth. LLGS employs unsupervised optimization driven by zero-knowledge priors such as the Gray World Assumption, which stipulates that the average color of a naturally illuminated scene should be approximately achromatic, i.e., close to a reference gray. Losses are formulated as:

  • Color loss:

$$\mathcal{L}_c = \mathbb{E}\left[(\hat{c} - e)^2\right] + \lambda_1\, \mathbb{E}\left[\mathrm{var}(\hat{c}) / (\beta_1 + \mathrm{var}_c(r))\right] + \lambda_2 \|\gamma\|_2$$

  • Gradient preservation loss:

$$\mathcal{L}_g = \mathrm{SSIM}(G(I_e), G(I))$$

where $G(\cdot)$ denotes a Sobel-based gradient operator, enforcing edge consistency and sharpness.
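
A rough PyTorch sketch of these two terms is shown below. The reference gray $e$, the loss weights, and the substitution of a simple L1 comparison of Sobel gradient magnitudes in place of SSIM are assumptions made to keep the example self-contained:

```python
# Sketch of the unsupervised losses above, with assumed hyperparameters.
import torch
import torch.nn.functional as F

def sobel_grad(img):
    """img: (B, 1, H, W) grayscale; returns a Sobel gradient-magnitude map."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def color_loss(enhanced, raw, gamma, e=0.5, lam1=0.1, lam2=0.01, beta1=1e-4):
    """Gray-world term + channel-variance ratio + L2 regularization on gamma."""
    gray_world = ((enhanced.mean(dim=(1, 2, 3)) - e) ** 2).mean()
    var_ratio = (enhanced.var(dim=1) / (beta1 + raw.var(dim=1))).mean()
    return gray_world + lam1 * var_ratio + lam2 * gamma.norm(p=2)

def gradient_loss(enhanced_gray, raw_gray):
    """Edge-preservation term; the paper compares gradient maps with SSIM."""
    return F.l1_loss(sobel_grad(enhanced_gray), sobel_grad(raw_gray))
```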

The optimization is joint over all camera views: direction-dependent enhancement arises because $\omega_i$ is modulated by the current camera pose. This keeps the enhancement consistent across viewpoints, which is crucial for applications such as SLAM and multi-view 3D reconstruction.

4. Spatially Adaptive and Temporal Color Encoding

In ColorGS (Ji et al., 26 Aug 2025), limitations of fixed per-Gaussian color are addressed via Colored Gaussian Primitives and dynamic anchor-based encoding. Each Gaussian primitive is associated with $k$ spatial color anchors $A_i = (A^x_i, A^y_i)$, each with a learnable color $c_i$. For a rendering point $p = (u, v)$:

  • Anchor influence:

$$F_{A_i}(p) = \exp(-\lambda_e \|p - A_i\|^2)$$

  • Aggregated anchor contribution:

$$F_c(p) = \sum_{i=0}^{k-1} F_{A_i}(p)\, c_i$$

  • Final color:

$$c(p, d) = \mathrm{SH}(d) + F_c(p)$$

Spatial adaptivity is thus achieved by modulating color based on 2D position relative to dynamic anchor locations, which allows the system to recover subtle texture and lighting variation. This approach is effective for surgical scene reconstruction, where local differences matter.

5. Deformation Modeling in Dynamic Scenes

For dynamic reconstruction—particularly in endoscopic and surgical environments—robust modeling of both local and global deformations is required. The Enhanced Deformation Model (EDM) in ColorGS handles this by a dual mechanism:

  • Time-aware Gaussian basis functions:

$$\tilde{b}(t; \theta_j, \sigma_j) = \exp\left(-\frac{(t - \theta_j)^2}{2 \sigma_j^2}\right)$$

  • Deformation synthesis (e.g., for the $x$-coordinate):

$$\psi^x(t, \Theta^x) = \sum_{j=0}^{B-1} \omega^x_j\, \tilde{b}(t; \theta^x_j, \sigma^x_j) + \delta_x$$

This construction separates smooth global shifts ($\delta_x$) from localized, time-varying deformation, supporting both high geometric fidelity and temporal consistency.
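
A compact sketch of this per-coordinate deformation is shown below; parameter shapes and the PyTorch formulation are assumptions for illustration:

```python
# Time-aware deformation of one coordinate: a weighted sum of B Gaussian basis
# functions of time plus a global shift delta.
import torch

def deform_coord(t, weights, centers, widths, delta):
    """t: scalar or (T,) times; weights/centers/widths: (B,) per-basis parameters."""
    t = torch.as_tensor(t, dtype=torch.float32).reshape(-1, 1)        # (T, 1)
    basis = torch.exp(-((t - centers) ** 2) / (2.0 * widths ** 2))    # (T, B): b~(t; θ_j, σ_j)
    return basis @ weights + delta                                    # ψ(t) = Σ_j ω_j b~(...) + δ
```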

6. Experimental Results and Performance Analysis

Empirical evaluations demonstrate that both LLGS and ColorGS achieve state-of-the-art results on relevant benchmarks:

Method  | PSNR (dB) | SSIM       | Rendering Speed | Feature Matching (%)
LLGS    | n/a       | up to 0.95 | 136 FPS         | 85.3
ColorGS | 39.85     | 0.9725     | real time       | n/a
LLNeRF  | lower     | lower      | lower           | n/a

LLGS improves training time (47 min vs. 246 min for LLNeRF) and memory usage, while increasing perceptual and multi-view consistency. Feature-based tasks such as SLAM see direct benefits, with higher matching rates in LLGS reconstructions versus baselines.

7. Conceptual and Practical Implications

The formulations in LLGS and ColorGS reveal strong conceptual resonance with Lab color space principles:

  • The explicit decoupling of luminance and chromaticity (as in M-Color and anchor-based models) enables operations analogous to $L^*$-channel gamma correction and $a^*, b^*$-channel chromatic adjustment.
  • Gamma correction in these pipelines,

$$I_g(x, y) = A \cdot I(x, y)^\gamma$$

acts as a non-linear luminance transformation comparable to Lab-space lightness operations.
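
As a tiny worked example (the values of $A$ and $\gamma$ here are arbitrary), applying this correction to a normalized intensity image compresses or expands the luminance range nonlinearly, much like operating on $L^*$ alone:

```python
# Worked example of I_g(x, y) = A * I(x, y)^gamma with illustrative constants.
import numpy as np

I = np.random.rand(64, 64)     # normalized intensity image in [0, 1]
A, gamma = 1.0, 0.6            # gamma < 1 brightens dark regions
I_g = A * np.power(I, gamma)
```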

These parallels suggest practical avenues for integrating Lab-based techniques within Gaussian Splatting frameworks, especially for tasks necessitating perceptual or colorimetric consistency, such as global color correction under variable illumination, robust feature matching, and perceptually motivated rendering. Applications are evident in robotics, AR/VR, intraoperative guidance, and any context requiring rapid, consistent scene reconstruction under challenging visual conditions.

A plausible implication is that future models may explicitly use Lab representations, or design learnable splits of luminance and chromaticity in the spirit of Lab, to further bridge perceptual modeling with high-performance 3D scene synthesis.
