
Extreme Views: 3DGS Filter for Novel View Synthesis from Out-of-Distribution Camera Poses (2510.20027v1)

Published 22 Oct 2025 in cs.CV and cs.GR

Abstract: When viewing a 3D Gaussian Splatting (3DGS) model from camera positions significantly outside the training data distribution, substantial visual noise commonly occurs. These artifacts result from the lack of training data in these extrapolated regions, leading to uncertain density, color, and geometry predictions from the model. To address this issue, we propose a novel real-time render-aware filtering method. Our approach leverages sensitivity scores derived from intermediate gradients, explicitly targeting instabilities caused by anisotropic orientations rather than isotropic variance. This filtering method directly addresses the core issue of generative uncertainty, allowing 3D reconstruction systems to maintain high visual fidelity even when users freely navigate outside the original training viewpoints. Experimental evaluation demonstrates that our method substantially improves visual quality, realism, and consistency compared to existing Neural Radiance Field (NeRF)-based approaches such as BayesRays. Critically, our filter seamlessly integrates into existing 3DGS rendering pipelines in real-time, unlike methods that require extensive post-hoc retraining or fine-tuning. Code and results at https://damian-bowness.github.io/EV3DGS

Summary

  • The paper introduces a two-pass, gradient-based sensitivity filter that suppresses rendering artifacts in 3D Gaussian Splatting for views rendered from out-of-distribution camera poses.
  • It computes per-ray sensitivity scores using a rotation-aligned gradient formulation to identify and suppress unstable primitives during rendering.
  • The method achieves superior perceptual quality with lower error metrics, enabling efficient real-time synthesis in challenging, unconstrained environments.

Sensitivity-Based Filtering for Robust 3D Gaussian Splatting under Out-of-Distribution Camera Poses

Introduction

This paper addresses a critical limitation in 3D Gaussian Splatting (3DGS) for novel view synthesis: the emergence of severe rendering artifacts when camera poses deviate significantly from the training distribution. Such artifacts (floating primitives, inconsistent geometry, and view-dependent noise) arise from insufficient multi-view constraints during training, which leaves Gaussian primitives poorly constrained. Existing solutions, such as retraining with additional regularization or post-hoc uncertainty quantification, are computationally expensive and impractical for real-time deployment. The authors propose a real-time, render-aware filtering method that operates during rendering, leveraging gradient-based sensitivity analysis to identify and suppress unstable 3D primitives, particularly those exhibiting anisotropic, orientation-induced instabilities.

Technical Contributions

The core technical innovation is a two-pass, gradient-based sensitivity filter that operates at render time. The method computes per-ray, per-Gaussian sensitivity scores by analyzing the gradient of the rendered pixel color with respect to spatial perturbations of the Gaussian primitive. This analysis is performed in a rotation-aligned coordinate system, isolating directional instabilities due to anisotropic orientations. The filter consists of:

  1. Gradient Sensitivity Computation: For each ray-Gaussian intersection, the method calculates the gradient of the composite color with respect to the 3D position, focusing on the transmittance dynamics and decoupling from color-specific variation. The gradient is evaluated in the Gaussian's rotation-aligned space, emphasizing orientation effects.
  2. Two-Pass Filtering: In the first pass, intersections are accepted or rejected based on a sensitivity threshold. In the second pass, Gaussians with a high rejection ratio (aggregate sensitivity) are excluded from rendering for the current viewpoint.
  3. Ray-Marching Integration: The filter is implemented within a ray-marching pipeline, enabling precise control over ray-Gaussian interactions and facilitating efficient, render-time sensitivity analysis.

This approach directly targets the core source of generative uncertainty—directional instability from anisotropic Gaussian primitives—without requiring retraining or modification of the underlying 3DGS reconstruction pipeline.
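
To make the rotation-aligned evaluation concrete, here is a minimal NumPy sketch that maps a ray sample into a Gaussian's local frame. The function name and parameter layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def to_rotation_aligned(x, mean, R):
    """Express a world-space sample in a Gaussian's rotation-aligned frame.

    x:    (3,) world-space point on the ray
    mean: (3,) Gaussian center
    R:    (3, 3) rotation matrix recovered from the Gaussian's quaternion
    """
    # Rotate the offset into the ellipsoid's own axes. Scale is deliberately
    # left out: the paper's ablation reports that scale-normalized gradients
    # over-filter small, detail-critical primitives.
    return R.T @ (x - mean)
```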

Methodological Details

3DGS Representation and Rendering

3DGS models radiance fields as a set of explicit 3D Gaussian primitives, each parameterized by color, opacity, mean, and covariance. The covariance is decomposed into scale and rotation matrices, representing the Gaussian as an ellipsoid. Rendering involves depth-sorting and alpha compositing of projected Gaussians onto the image plane.
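
As a reference for the representation just described, the following NumPy sketch shows the covariance factorization and front-to-back alpha compositing; it follows the standard 3DGS formulation rather than the paper's exact code.

```python
import numpy as np

def covariance(R, scales):
    """Sigma = R S S^T R^T: an oriented ellipsoid with per-axis scales."""
    S = np.diag(scales)
    return R @ S @ S.T @ R.T

def composite(colors, alphas):
    """Front-to-back alpha compositing of depth-sorted, projected Gaussians.

    colors: (K, 3) per-Gaussian colors along one ray
    alphas: (K,)   effective opacities after 2D projection
    """
    T = 1.0                     # accumulated transmittance prod_{j<k}(1 - a_j)
    pixel = np.zeros(3)
    for c, a in zip(colors, alphas):
        pixel += T * a * c      # weight of Gaussian k: a_k * prod_{j<k}(1 - a_j)
        T *= 1.0 - a
    return pixel
```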

Sensitivity Analysis

The sensitivity score for each Gaussian is derived from the gradient of the composite color with respect to spatial perturbations, expressed as:

$$\nabla C(x) = \sum_{k=1}^{K} c_k a_k \prod_{j=1}^{k-1} (1 - a_j) \left( \sum_{j=1}^{k-1} \frac{a_j \Sigma_j^{-1} x_j}{1 - a_j} - \Sigma_k^{-1} x_k \right)$$

To focus on structural sensitivity, the color vector is replaced with a scalar, and the gradient is computed in the rotation-aligned space:

$$S = \sum_{k=1}^{K} a_k \prod_{j=1}^{k-1} (1 - a_j) \left( \sum_{j=1}^{k-1} \frac{a_j x_j}{1 - a_j} - x_k \right)$$

This formulation isolates rotational sensitivity, highlighting Gaussians prone to view-dependent artifacts due to poor orientation.
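
A direct transcription of the scalar sensitivity formula into NumPy might look as follows; this is a sketch under the assumption that intersections arrive depth-sorted and that each x_k is already expressed in its Gaussian's rotation-aligned frame.

```python
import numpy as np

def ray_sensitivity(alphas, xs, eps=1e-8):
    """Evaluate S = sum_k a_k prod_{j<k}(1 - a_j) (sum_{j<k} a_j x_j / (1 - a_j) - x_k).

    alphas: (K,)   opacities of depth-sorted intersections on one ray
    xs:     (K, 3) intersection offsets in rotation-aligned coordinates
    Returns the (3,) sensitivity vector; its norm serves as the per-ray score.
    """
    T = 1.0                               # running transmittance prod_{j<k}(1 - a_j)
    inner = np.zeros(3)                   # running sum of a_j x_j / (1 - a_j)
    S = np.zeros(3)
    for a, x in zip(alphas, xs):
        S += a * T * (inner - x)          # contribution of intersection k
        inner += a * x / (1.0 - a + eps)  # eps guards fully opaque Gaussians
        T *= 1.0 - a
    return S
```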

Filtering Pipeline

  • First Pass: For each ray-Gaussian intersection, compute sensitivity and accept/reject based on a threshold.
  • Second Pass: For each Gaussian, compute the rejection ratio; exclude Gaussians exceeding a user-defined threshold from rendering.

This pipeline preserves detail-carrying, stable Gaussians while suppressing unstable, artifact-inducing primitives.
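
A compact sketch of the two-pass logic, with hypothetical names and thresholds (the actual implementation runs inside the ray-marching kernel):

```python
def two_pass_filter(intersections, sens_threshold, reject_threshold):
    """Decide which Gaussians to drop for the current viewpoint.

    intersections: iterable of (gaussian_id, sensitivity_score) pairs
        collected while ray marching the frame (first pass).
    Returns the set of Gaussian ids excluded from the render (second pass).
    """
    hits, rejects = {}, {}
    for gid, score in intersections:            # pass 1: per-intersection test
        hits[gid] = hits.get(gid, 0) + 1
        if score > sens_threshold:
            rejects[gid] = rejects.get(gid, 0) + 1
    # Pass 2: exclude Gaussians whose rejection ratio exceeds the threshold,
    # keeping stable, detail-carrying primitives in the render.
    return {gid for gid in hits
            if rejects.get(gid, 0) / hits[gid] > reject_threshold}
```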

Experimental Results

The method is evaluated on the Deep Blending and NeRF On-the-go datasets, using a modified Nerfstudio Splatfacto pipeline with ray marching and the proposed filter. Perceptual quality is assessed with no-reference image quality assessment (NR-IQA) metrics (NIQE, BRISQUE, PIQE), where lower scores indicate better quality. Across all scenes and metrics, the proposed filter achieves the lowest scores, outperforming BayesRays, a NeRF-based uncertainty-filtering baseline. The filter suppresses anisotropy-induced artifacts in extreme out-of-distribution (OOD) views without over-smoothing, maintaining high visual fidelity and geometric consistency.

Ablation Studies

  • Single-Pass Filtering: Filtering only at the intersection level fails to remove artifact-prone Gaussians near their centers, where gradient magnitudes are low.
  • Scale-Incorporated Gradients: Including scale in the gradient calculation normalizes anisotropic Gaussians, leading to loss of detail and over-filtering of small, detail-critical primitives.
  • Parameter Sensitivity: Optimal performance requires scene-specific tuning of the sensitivity and rejection thresholds; relying on a single threshold yields suboptimal perceptual quality (a tuning sketch follows this list).
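
Since both thresholds are scene-specific, a simple grid search can serve as a starting point. In the sketch below, `render_view` and `niqe_score` are hypothetical hooks standing in for the filtered renderer and any lower-is-better NR-IQA metric; they are not part of the paper's code.

```python
import itertools

def tune_thresholds(render_view, niqe_score, views,
                    sens_grid=(0.1, 0.2, 0.5, 1.0),
                    reject_grid=(0.3, 0.5, 0.7)):
    """Pick the (sensitivity, rejection) threshold pair with the best average
    perceptual score over a set of held-out out-of-distribution views."""
    best_pair, best_score = None, float("inf")
    for s_t, r_t in itertools.product(sens_grid, reject_grid):
        avg = sum(niqe_score(render_view(v, s_t, r_t)) for v in views) / len(views)
        if avg < best_score:
            best_pair, best_score = (s_t, r_t), avg
    return best_pair
```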

Theoretical and Practical Implications

The proposed sensitivity filter provides a fast, render-time proxy for epistemic uncertainty, analogous to the Fisher Information Matrix but computed per-intersection in the rotation-aligned space. This enables real-time suppression of artifacts without retraining, making the approach suitable for interactive applications and deployment scenarios with unpredictable camera trajectories. The method bridges a gap in uncertainty quantification for explicit volumetric representations, addressing directional instabilities that are inadequately handled by isotropic or redundancy-based filters.

Future Directions

Potential future developments include:

  • Adaptive Thresholding: Automated, data-driven selection of sensitivity and rejection thresholds to optimize perceptual quality across diverse scenes.
  • Integration with Active View Selection: Leveraging sensitivity scores for dynamic view planning and data acquisition in 3D reconstruction.
  • Extension to Other Explicit Representations: Adapting the filter to other explicit volumetric or point-based rendering techniques.
  • Hardware Acceleration: Optimizing the filter for GPU-based real-time rendering pipelines.

Conclusion

This work introduces a real-time, gradient-based sensitivity filter for 3DGS that robustly suppresses anisotropic, orientation-induced artifacts in novel view synthesis from out-of-distribution camera poses. The method operates entirely at render time, requiring no retraining or modification of the reconstruction pipeline, and achieves superior perceptual quality compared to existing NeRF-based uncertainty filters. The approach advances the practical deployment of 3DGS in interactive and unconstrained viewing scenarios, with implications for uncertainty quantification and robust scene reconstruction in computer vision and graphics.
