
Reflection-Sensitive Gaussian Pruning

Updated 10 December 2025
  • The paper introduces a gradient-based method that computes an importance score $I_i$ by blending base and reflection loss gradients to decide which Gaussian splats to prune.
  • It achieves significant efficiency gains by adaptively removing low-impact splats, reducing model size nearly 4× and accelerating inference up to 7× without major quality losses.
  • The strategy balances preserving photorealistic reflections with computational savings by integrating both gradient sensitivity and SDF-aware spatial criteria.

Reflection-sensitive Gaussian pruning refers to model reduction strategies in 3D Gaussian splatting pipelines where the decision to remove basis functions (Gaussians) is explicitly driven by their quantitative impact on both base and reflection image contributions. Such pruning is especially vital for realistic novel view synthesis in scenes featuring complex specular phenomena, where preserving splats that control reflections is essential for photorealism, while also accelerating inference and reducing memory. The two most prominent methods are the gradient-based reflection-sensitive pruning of HybridSplat and the SDF-based spatial pruning of GS-ROR$^2$.

1. Theoretical Formulation

Reflection-sensitive Gaussian pruning in HybridSplat (Liu et al., 9 Dec 2025) is fundamentally gradient-based. The model reconstructs two image branches per view: a “base” color $\mathbf{c}_{\text{base}}(\mathbf{p})$ and a “reflection” color $\mathbf{c}_{\text{ref}}(\mathbf{p})$ for each pixel $\mathbf{p}$. The final output is their convex combination:

$\hat{I}(\mathbf{p}) = (1-\beta)\,\mathbf{c}_{\text{base}}(\mathbf{p}) + \beta\,\mathbf{c}_{\text{ref}}(\mathbf{p}),$

where $\beta$ is a global reflection blend factor.

The reconstruction loss is the standard RGB $\ell_2$ loss over all pixels plus optional regularization. Each 3D Gaussian $g_i$ (base or reflective) is scored by the squared gradient magnitude of the loss with respect to its parameters:

$B_i = \|\nabla_{g_i} \mathcal{L}\|^2 \quad (g_i \in G_{\text{base}}), \qquad R_i = \|\nabla_{g_i} \mathcal{L}\|^2 \quad (g_i \in G_{\text{ref}}).$

The unified reflection-sensitive importance is

$I_i = (1-\beta)\,B_i + \beta\,R_i.$

This scalar $I_i$ quantifies the influence of $g_i$ on the blended output, assigning reflection splats increased relevance where $\beta$ is large. Gaussians with $I_i < \tau$ (a user-set threshold), or those among the lowest $p\%$ by $I_i$, are pruned.
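In code, this scoring amounts to one backward pass followed by a row-wise reduction over each branch's parameter gradients. A minimal PyTorch sketch with a toy stand-in for the differentiable rasterizer (the real pipeline backpropagates through tile-based splatting; all shapes and names here are illustrative):

import torch

def toy_render_loss(g_base, g_ref, beta):
    """Toy stand-in for the differentiable rasterizer + RGB L2 loss."""
    base_img = g_base.sum(dim=0)               # toy "render" of the base branch
    ref_img  = g_ref.sum(dim=0)                # toy "render" of the reflection branch
    target   = torch.zeros_like(base_img)      # toy ground-truth image
    return (((1 - beta) * base_img + beta * ref_img - target) ** 2).sum()

beta   = 0.3                                   # global reflection blend factor
g_base = torch.randn(500, 8, requires_grad=True)  # per-row base Gaussian params
g_ref  = torch.randn(200, 8, requires_grad=True)  # per-row reflective params

loss = toy_render_loss(g_base, g_ref, beta)
loss.backward()

# Row-wise squared gradient magnitudes, then the blended importance I_i.
B = g_base.grad.pow(2).sum(dim=1)              # B_i for base splats
R = g_ref.grad.pow(2).sum(dim=1)               # R_i for reflective splats
I_base = (1 - beta) * B                        # I_i of base splats (R_i term is zero)
I_ref  = beta * R                              # I_i of reflective splats (B_i term is zero)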

2. Reflection-Sensitive Pruning Algorithm

Pruning in HybridSplat operates either periodically during training or as post-processing. Each step computes the loss gradients via backpropagation, then evaluates $I_i$ for each Gaussian. Pseudocode for adaptive threshold-based pruning:

for iteration in range(1, MaxIter + 1):
    optimizer.zero_grad()
    render_base = rasterize(G_base, camera)              # base-branch image
    render_ref  = rasterize(G_ref,  camera)              # reflection-branch image
    # RGB L2 loss of the blended image: square the per-pixel residual, then sum
    L = sum_p(((1 - β)*render_base(p) + β*render_ref(p) - I_gt(p))**2) + regularizers
    backpropagate(L)                                     # populates grad_{g_i} L
    if iteration % Freq == 0:
        I = {}                                           # per-Gaussian importance
        for g_i in G_base + G_ref:
            B_i = norm2(grad_{g_i} L) if g_i in G_base else 0   # norm2 = squared 2-norm
            R_i = norm2(grad_{g_i} L) if g_i in G_ref  else 0
            I[g_i] = (1 - β)*B_i + β*R_i
        prune(g_i for g_i in I if I[g_i] < τ)            # or prune the lowest p% by I[g_i]
    optimizer.step()  # e.g., Adam

Aggressiveness is controlled by $\tau$ or the percentile $p$; e.g., discarding 5% of splats every 500 iterations. The inclusion of the $\beta R_i$ term is mandatory to preserve highlight splats.
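A minimal PyTorch-style sketch of the percentile variant, assuming the per-Gaussian scores $I_i$ have been collected into a single tensor (the helper name is illustrative, not from the paper):

import torch

def prune_lowest_fraction(scores: torch.Tensor, p: float) -> torch.Tensor:
    """Boolean keep-mask that drops the fraction p of lowest-importance splats."""
    tau = torch.quantile(scores, p)   # adaptive threshold: the p-quantile of I_i
    return scores >= tau              # True = keep, False = prune

importance = torch.rand(100_000)      # stand-in for per-Gaussian scores I_i
keep = prune_lowest_fraction(importance, 0.05)  # discard the lowest 5%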

HybridSplat schedules pruning after an initial warm-up phase, allowing the base reconstruction to stabilize before culling. Once pruning occurs, its tile-based data structures are rebuilt. The trained, pruned model then supports real-time inference with high reflection quality. Reflection-sensitive pruning ties directly to gradient feedback from both branches, ensuring that specular contributors are retained in proportion to their optical effect.
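After each pruning pass, the surviving parameters must be compacted before the tile lists are regenerated. A minimal sketch of that compaction step, assuming per-Gaussian attributes are stored row-wise in tensors (the attribute names are illustrative, not HybridSplat's actual fields):

import torch

def compact_gaussians(params: dict[str, torch.Tensor],
                      keep: torch.Tensor) -> dict[str, torch.Tensor]:
    """Drop pruned rows from every per-Gaussian attribute tensor."""
    return {name: tensor[keep] for name, tensor in params.items()}

# Hypothetical attribute layout: one row per Gaussian.
params = {
    "means":     torch.randn(1000, 3),
    "scales":    torch.randn(1000, 3),
    "rotations": torch.randn(1000, 4),
    "opacities": torch.randn(1000, 1),
}
keep = torch.rand(1000) > 0.05        # stand-in for the importance mask
params = compact_gaussians(params, keep)
# Tile/bin acceleration structures are then rebuilt from the compacted arrays.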

3. SDF-Aware Spatial Pruning in GS-ROR$^2$

In contrast, GS-ROR$^2$ (Zhu et al., 22 May 2024) utilizes an SDF-aware pruning protocol complementary to deferred shading. After bidirectional supervision aligns Gaussian and SDF-predicted depths/normals, each Gaussian's central location $\mu_i$ is queried for its signed distance $s_i = \operatorname{SDF}(\mu_i)$. A masking function

$M_i = \mathbf{1}[\,|s_i| \leq s_\varepsilon\,],$

with $s_\varepsilon$ derived from a density falloff $\phi_s(s_\varepsilon) = 0.01$, determines inclusion. Floaters ($M_i = 0$) are pruned in each cycle, and the threshold tightens as the SDF sharpens, ensuring that all retained Gaussians lie near the true surface.
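A minimal sketch of this mask, assuming a NeuS-style logistic density for $\phi_s$ (the exact falloff used by GS-ROR$^2$ may differ); the SDF here is a toy unit sphere and `solve_s_eps` is an illustrative helper:

import torch

def phi_s(x: torch.Tensor, s: float) -> torch.Tensor:
    """Assumed NeuS-style logistic density, peaked at the zero level set."""
    e = torch.exp(-s * x)
    return s * e / (1.0 + e) ** 2

def solve_s_eps(s: float, target: float = 0.01) -> float:
    """Bisect for s_eps with phi_s(s_eps) = target (phi_s decreases for x > 0)."""
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if phi_s(torch.tensor(mid), s).item() > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sdf = lambda x: x.norm(dim=-1) - 1.0       # toy SDF: unit sphere
centers = torch.randn(1000, 3)             # stand-in Gaussian centers mu_i
s_i = sdf(centers)                         # signed distance at each center
keep = s_i.abs() <= solve_s_eps(s=64.0)    # mask M_i: inside the SDF shell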

4. Hyperparameter Selection and Trade-Offs

Reflection-sensitive pruning exposes trade-offs between model size, rendering speed, and scene reconstruction fidelity:

  • Higher thresholds or larger prune fractions ($\tau$ or $p$) yield fewer splats, proportionally increasing speed and reducing memory usage. However, excessive pruning leads to loss of fine reflection features, visible as reduced highlight quality or “dull” reflections.
  • Lower thresholds are more conservative, preserving image fidelity but limiting acceleration.
  • Best practices: prune $p = 5\%$ every 500 iterations and monitor held-out PSNR, lowering $p$ if quality drops by more than $0.1$ dB (see the sketch after this list). For real-time targets, $p = 10\%$ can yield $10\times$ compression with tolerable (sub-$0.5$ dB) PSNR loss.
  • In scenes with subtle specularities, a smaller $\beta$ is recommended to avoid under-representation of reflective details.
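In the same pseudocode style as the algorithm above, a quality-gated schedule implementing the monitoring guideline might look like this (the evaluation and pruning helpers are hypothetical, following the earlier sketches):

p = 0.05                                        # prune fraction per cycle
baseline_psnr = None
for iteration in range(1, MaxIter + 1):
    training_step()                             # render, loss, backprop, step
    if iteration % 500 == 0:
        psnr = evaluate_psnr(heldout_views)     # held-out PSNR monitor
        if baseline_psnr is None:
            baseline_psnr = psnr
        elif baseline_psnr - psnr > 0.1:        # quality dropped by > 0.1 dB
            p = max(p / 2, 0.01)                # back off the prune rate
        keep = prune_lowest_fraction(importance, p)
        apply_mask(keep)                        # compact arrays, rebuild tiles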

5. Quantitative Evaluation and Empirical Effects

HybridSplat’s reflection-sensitive pruning achieves substantial resource savings on complex reflective datasets. On Ref-NeRF and NeRF-Casting, measured against EnvGS (no pruning), the model size falls from approximately $1.4$ M to $0.386$ M Gaussians (almost a $4\times$ reduction), and inference speed increases from $15$ FPS to $107$ FPS (a $7\times$ speedup) on a single RTX 4090. Quality metrics remain high: PSNR drops by less than $0.4$ dB (EnvGS $30.21$ dB, HybridSplat $29.87$ dB), SSIM is $0.864$ vs. $0.872$, and LPIPS increases slightly, reflecting negligible visual degradation for a major computational gain (Liu et al., 9 Dec 2025).

For GS-ROR$^2$, introducing SDF-aware pruning after mutual supervision improves mean PSNR by $0.2$ dB and SSIM by $0.002$ (to $23.31$/$0.9376$), while removing floaters that degrade relighting quality. The overhead of SDF-based pruning is minimal, with real-time rendering ($>200$ FPS) and only a $0.5$ h increase in total training time (Zhu et al., 22 May 2024).

System        Pruning approach       Splats (#, Ref-NeRF)   FPS (RTX 4090)   PSNR/SSIM
EnvGS         No pruning             1.4 M                  15               30.21/0.872
HybridSplat   Reflection-sensitive   0.386 M                107              29.87/0.864
GS-ROR$^2$    SDF-aware              auto (scene-dep.)      >200             23.31/0.9376

6. Practical Guidelines and Limitations

Reflection-sensitive pruning mandates the inclusion of the $\beta R_i$ term; omitting it preferentially removes splats providing key highlights. Best practice is to track validation PSNR, reducing the prune rate if drops exceed $0.2$ dB. In scenes dominated by diffuse content, $\beta$ should be decreased to avoid over-pruning specular splats. Aggressive pruning (e.g., $p = 10\%$) is suitable only if mild degradation ($\sim 0.5$ dB) is permissible.

In SDF-aware pruning as in GS-ROR$^2$, tying the mask threshold $s_\varepsilon$ to the dynamic shape of $\phi_s$ ensures adaptivity: early in training, loose thresholds support geometric exploration, while later, tight thresholds guarantee fidelity to the recovered surface.
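Under the logistic-density assumption of the earlier sketch, this tightening can be checked numerically with the illustrative `solve_s_eps` helper:

# As the SDF sharpness s grows during training, the kept shell narrows:
for s in (8.0, 32.0, 128.0):
    print(f"s = {s:6.1f}  ->  s_eps = {solve_s_eps(s):.4f}")
# Larger s concentrates phi_s near the zero level set, so phi_s(x) = 0.01
# is reached at a smaller |x|, shrinking the band of retained Gaussians.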

A plausible implication is that reflection-sensitive pruning frameworks represent a general trend toward application-specific, task-aware model compression for neural rendering, where importance weighting directly tracks physically meaningful image contributions and not just generic error gradients.

7. Relationship to Broader Neural Rendering and Model Reduction Techniques

Reflection-sensitive pruning as pioneered in HybridSplat and GS-ROR$^2$ extends standard importance-based or spatial pruning to photorealistic, reflection-rich scene reconstruction, where naive methods would irrecoverably damage specular realism or introduce geometric floaters. The methodology integrates tightly into hybrid or deferred splatting pipelines and leverages physically rooted blend ratios, as well as signed distance functions for geometric regularization.

HybridSplat demonstrates that coupling importance scores across multi-branch rendering pipelines (by blending per-branch sensitivities) is critical for artifact-free, efficient novel view synthesis in challenging reflective environments. GS-ROR$^2$ shows that mutual supervision with SDFs both prunes geometric outliers and leads to sharper normals and specular highlights, all without runtime SDF dependence (Liu et al., 9 Dec 2025, Zhu et al., 22 May 2024).
