Moment-Based 3D Gaussian Splatting: Resolving Volumetric Occlusion with Order-Independent Transmittance (2512.11800v1)

Published 12 Dec 2025 in cs.CV and cs.GR

Abstract: The recent success of 3D Gaussian Splatting (3DGS) has reshaped novel view synthesis by enabling fast optimization and real-time rendering of high-quality radiance fields. However, it relies on simplified, order-dependent alpha blending and coarse approximations of the density integral within the rasterizer, thereby limiting its ability to render complex, overlapping semi-transparent objects. In this paper, we extend rasterization-based rendering of 3D Gaussian representations with a novel method for high-fidelity transmittance computation, entirely avoiding the need for ray tracing or per-pixel sample sorting. Building on prior work in moment-based order-independent transparency, our key idea is to characterize the density distribution along each camera ray with a compact and continuous representation based on statistical moments. To this end, we analytically derive and compute a set of per-pixel moments from all contributing 3D Gaussians. From these moments, a continuous transmittance function is reconstructed for each ray, which is then independently sampled within each Gaussian. As a result, our method bridges the gap between rasterization and physical accuracy by modeling light attenuation in complex translucent media, significantly improving overall reconstruction and rendering quality.

Summary

  • The paper introduces a moment-based transmittance formulation that enables order-independent volumetric rendering, resolving occlusion issues in complex scenes.
  • It employs power-transform warping and statistical moment integration to ensure numerical stability and accurate light transmittance without costly ray tracing.
  • The differentiable design with adaptive densification improves reconstruction quality, effectively capturing specular highlights, occlusions, and overlapping geometries.

Moment-Based 3D Gaussian Splatting: Order-Independent Volumetric Rendering

Background and Motivation

Recent advances in novel view synthesis have relied on two paradigms: implicit neural volumetric fields, such as NeRF, and more recently, explicit point-based representations such as 3D Gaussian Splatting (3DGS). While 3DGS enables real-time radiance field synthesis using explicit 3D Gaussians as primitives and achieves substantial rendering efficiency, it typically applies simplified alpha blending and relies on rasterization order, sacrificing the physical accuracy of volumetric integration. This leads to fundamental limitations when rendering scenes with semi-transparent or overlapping structures, as correct light attenuation and occlusion cannot be captured.

Alternative approaches have incorporated ray tracing for physically accurate volumetric integration with Gaussian primitives, but this comes at significant computational cost. Some rasterization-based methods attempt to regain certain volumetric properties but require correct ordering or resort to approximations that still fall short for complex visual scenarios.

The proposed Moment-Based 3D Gaussian Splatting (MB3DGS) addresses these deficiencies by realizing physically accurate, order-independent volumetric rendering entirely within a high-performance rasterization pipeline through a moment-based transmittance formulation.

Methodology

Moment-Based Transmittance Modeling

The core innovation of MB3DGS is characterizing the density accumulation along a view ray by its statistical moments, sidestepping the need for per-pixel ray tracing or sample sorting. Each camera ray aggregates contributions from overlapping 3D Gaussian primitives into per-pixel moments. These moments are then used to reconstruct a continuous transmittance function describing the medium's permeability to light without imposing any ordering or non-overlap assumptions.
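
As a toy illustration of the order-independent accumulation described above, the following sketch models each Gaussian's contribution along one camera ray as a 1D density lobe (weight, mean depth, spread along the ray) and sums its analytic raw power moments into a per-ray moment vector. The lobe parameters, the choice of plain power moments, and the brute-force check are illustrative assumptions, not the paper's exact formulation, which additionally reconstructs a transmittance function from the accumulated moments.

```python
import numpy as np

def gaussian_raw_moments(mu, sigma, order):
    """Raw moments E[t^k], k = 0..order, of a 1D normal N(mu, sigma^2),
    using the recurrence M_k = mu*M_{k-1} + (k-1)*sigma^2*M_{k-2}."""
    m = np.zeros(order + 1)
    m[0] = 1.0
    if order >= 1:
        m[1] = mu
    for k in range(2, order + 1):
        m[k] = mu * m[k - 1] + (k - 1) * sigma**2 * m[k - 2]
    return m

def accumulate_ray_moments(lobes, order=4):
    """Order-independent accumulation: each lobe (weight a, depth mu, spread s)
    adds a * E[t^k] to the per-ray moment vector.  The result does not depend
    on the order in which lobes are processed."""
    moments = np.zeros(order + 1)
    for a, mu, s in lobes:
        moments += a * gaussian_raw_moments(mu, s, order)
    return moments

# Three hypothetical Gaussians intersected by one ray:
# (absorbance mass along the ray, mean ray depth, std along the ray).
lobes = [(0.8, 2.0, 0.3), (0.5, 2.2, 0.5), (1.2, 5.0, 0.8)]
m = accumulate_ray_moments(lobes)

# Sanity check against brute-force quadrature of the same 1D density.
t = np.linspace(0.0, 10.0, 20000)
dt = t[1] - t[0]
density = sum(a * np.exp(-0.5 * ((t - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
              for a, mu, s in lobes)
m_ref = np.array([np.sum(density * t**k) * dt for k in range(5)])
print(np.allclose(m, m_ref, rtol=1e-3))  # True
```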

To compute physically accurate radiance, the volume rendering integral is evaluated independently per Gaussian. The method assumes only piecewise-constant density within quadrature intervals, which aligns with the physical basis used in volumetric imaging. Per-Gaussian emission is modeled using spherical harmonics for direction-dependent appearance.
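
To make the quantities explicit, the emission-absorption volume rendering integral and a generic per-Gaussian quadrature under the piecewise-constant density assumption can be written as below; the notation is chosen here for illustration rather than quoted from the paper.

```latex
T(t) = \exp\!\Big(-\!\int_{0}^{t} \sigma(s)\,\mathrm{d}s\Big), \qquad
C = \int_{0}^{\infty} T(t)\,\sigma(t)\,c(t)\,\mathrm{d}t
  = \sum_{i} \int_{0}^{\infty} T(t)\,\sigma_i(t)\,c_i(\omega)\,\mathrm{d}t
  \approx \sum_{i} c_i(\omega) \sum_{j} T(t_{i,j})\,\sigma_i(t_{i,j})\,\Delta t_{i,j}
```

Here σ_i denotes the i-th Gaussian's density along the ray, c_i(ω) its spherical-harmonics color for view direction ω, and t_{i,j} the quadrature samples within that Gaussian's support; the key departure from standard 3DGS is that T is reconstructed from the per-pixel moments rather than from sorted, alpha-composited samples.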

Numerical Stability and Proxy Geometry

A direct computation of high-order statistical moments is numerically unstable for the necessary integration extents and orders. MB3DGS resolves this via a power-transform warping of the domain, together with a closed-form recurrence for moment computation, ensuring stability across relevant scene scales.
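
The stability issue and the effect of warping can be demonstrated with a small experiment: raw power moments of metric depths span many orders of magnitude and yield a badly conditioned Hankel matrix, whereas moments of depths warped into [0, 1] remain well scaled and give a far better conditioned matrix. The affine-plus-power warp and the exponent below are illustrative choices, not the paper's exact transform or recurrence.

```python
import numpy as np

def hankel_condition(moments):
    """Condition number of the Hankel matrix built from moments m_0..m_{2n},
    a standard proxy for how well-posed the moment problem is numerically."""
    n = (len(moments) - 1) // 2
    H = np.array([[moments[i + j] for j in range(n + 1)] for i in range(n + 1)])
    return np.linalg.cond(H)

def power_moments(depths, weights, order):
    """Weighted raw power moments m_k = sum_i w_i * d_i^k (an order-independent sum)."""
    return np.array([np.sum(weights * depths**k) for k in range(order + 1)])

rng = np.random.default_rng(0)
depths = rng.uniform(0.5, 300.0, size=64)   # metric ray depths of contributing splats
weights = rng.uniform(0.1, 1.0, size=64)    # per-splat absorbance along the ray

# Unwarped: high-order moments of large metric depths explode numerically.
m_raw = power_moments(depths, weights, order=6)
print("raw Hankel condition number:    %.2e" % hankel_condition(m_raw))

# Warped: map depths into [0, 1] with a mild power transform before taking
# moments (gamma = 0.5 is an illustrative value, not the paper's).
t_near, t_far, gamma = 0.5, 300.0, 0.5
u = ((depths - t_near) / (t_far - t_near)) ** gamma
m_warp = power_moments(u, weights, order=6)
print("warped Hankel condition number: %.2e" % hankel_condition(m_warp))
```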

For efficient rasterization, the method introduces a confidence-interval-based geometric proxy that tightly bounds each Gaussian’s true radiance-contributing screen region. This proxy is mathematically derived from the projected covariance, mitigating both over- and under-coverage typical in affine projections and improving sampling robustness for anisotropic and near-camera Gaussians.
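
A minimal sketch of such a screen-space bound, assuming the 2D projected covariance of a splat is already available: the extent of a confidence ellipse follows from the chi-square quantile for two degrees of freedom, and its axis-aligned bounding box from the diagonal of the covariance. The confidence level and proxy shape are assumptions for illustration; the paper derives its own tight proxy from the projected covariance.

```python
import numpy as np

# Chi-square quantile for 2 degrees of freedom at a 99% confidence level,
# hardcoded to keep the sketch dependency-free (scipy.stats.chi2.ppf(0.99, 2)
# gives the same value).
CHI2_2DOF_99 = 9.2103

def confidence_proxy(center, cov2d, chi2=CHI2_2DOF_99):
    """Screen-space proxy for one splatted Gaussian.

    center : (2,) projected mean in pixels
    cov2d  : (2, 2) projected covariance in pixels^2
    Returns the confidence ellipse (half-axis lengths and directions) and the
    axis-aligned bounding box of that ellipse.
    """
    evals, evecs = np.linalg.eigh(cov2d)                     # ascending eigenvalues
    half_axes = np.sqrt(chi2 * np.clip(evals, 0.0, None))    # ellipse half-axes
    # AABB of the ellipse: half-extent along x is sqrt(chi2 * cov_xx), same for y.
    half_extent = np.sqrt(chi2 * np.clip(np.diag(cov2d), 0.0, None))
    return half_axes, evecs, (center - half_extent, center + half_extent)

# Example: a strongly anisotropic splat.
center = np.array([512.0, 300.0])
cov2d = np.array([[400.0, 180.0],
                  [180.0,  90.0]])
half_axes, axes, (lo, hi) = confidence_proxy(center, cov2d)
print("ellipse half-axes (px):", np.round(half_axes, 1))
print("AABB:", np.round(lo, 1), "to", np.round(hi, 1))
```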

Differentiable Optimization and Adaptive Densification

The approach supports fully differentiable training, required for optimization-based scene representation. An adjoint rendering pass consolidates gradient computations to handle the custom rasterization and moment computations efficiently. The MB3DGS training objective augments the standard 3DGS loss with a consistency regularizer that penalizes deviations between predicted and analytical transmittance, enforcing physical monotonicity and reducing overfitting.
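
A hedged sketch of how such an objective might be assembled in PyTorch: the photometric part follows common 3DGS practice, while the consistency term, the weights lam and beta, and the d_ssim callable are illustrative placeholders rather than the paper's exact definitions.

```python
import torch

def transmittance_consistency(T_pred: torch.Tensor, T_ref: torch.Tensor) -> torch.Tensor:
    """Penalize deviation between the moment-reconstructed transmittance T_pred
    and an analytically integrated reference T_ref at shared ray samples
    (both of shape [rays, samples], values in [0, 1])."""
    return torch.mean((T_pred - T_ref) ** 2)

def training_loss(render, target, T_pred, T_ref, d_ssim,
                  lam: float = 0.2, beta: float = 0.05) -> torch.Tensor:
    """Standard 3DGS-style photometric loss (L1 + D-SSIM mix) plus the
    transmittance-consistency regularizer.  d_ssim is assumed to be an existing
    structural-dissimilarity implementation; lam and beta are illustrative."""
    l1 = torch.mean(torch.abs(render - target))
    photometric = (1.0 - lam) * l1 + lam * d_ssim(render, target)
    return photometric + beta * transmittance_consistency(T_pred, T_ref)
```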

Adaptive Density Control (ADC) is extended to view-dependent volumetric opacities, with robust pruning and densification decisions based on the worst-case axis-aligned opacity of each Gaussian.
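
As a rough sketch of one possible view-dependent opacity statistic, the code below integrates a volumetric Gaussian's density along a line through its center in each coordinate direction and takes the largest resulting opacity as a worst-case pruning criterion; the precise statistic and thresholds used by the paper's ADC are assumptions here, not quoted from the source.

```python
import numpy as np

def axis_aligned_opacities(sigma0: float, cov: np.ndarray) -> np.ndarray:
    """Peak opacity of a volumetric Gaussian density
        sigma(x) = sigma0 * exp(-0.5 * x^T cov^{-1} x)
    integrated along a line through its center in each coordinate direction e_k.
    The line integral is sigma0 * sqrt(2*pi / (e_k^T cov^{-1} e_k)), and the
    resulting opacity is 1 - exp(-integral)."""
    prec = np.linalg.inv(cov)
    line_integrals = sigma0 * np.sqrt(2.0 * np.pi / np.diag(prec))
    return 1.0 - np.exp(-line_integrals)

def keep_gaussian(sigma0: float, cov: np.ndarray, prune_eps: float = 0.01) -> bool:
    """Prune only if even the worst-case (largest) axis-aligned opacity falls
    below the threshold; prune_eps is an illustrative value."""
    return axis_aligned_opacities(sigma0, cov).max() >= prune_eps

# Example: a thin, elongated Gaussian (scales 0.5, 0.05, 0.01 along its axes).
cov = np.diag(np.array([0.5, 0.05, 0.01]) ** 2)
print(axis_aligned_opacities(0.3, cov))   # largest opacity along the longest axis
print(keep_gaussian(0.3, cov))
```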

Experimental Results

MB3DGS is empirically validated on established benchmarks for novel view synthesis, including Mip-NeRF 360, Tanks and Temples, and Deep Blending. It is compared against standard 3DGS, StopThePop, Vol3DGS, and recent volumetric-aware and ray-traced approaches.

Quantitative results indicate that, while MB3DGS produces slightly lower PSNR and SSIM in aggregate compared to alpha-blended baselines, it achieves comparable perceptual (LPIPS) scores and—most importantly—delivers significantly improved reconstruction quality for scenes with complex translucency, color blending, and volumetric occlusion. Qualitative analysis shows substantially improved sharpness and physical plausibility in scenarios involving intersecting Gaussians, specular highlights, and reflections, resolving visual artifacts that persist with prior rasterization-based volumetric extensions such as Vol3DGS.

Ablation studies demonstrate the crucial roles of trigonometric moment integration (which better captures splat ordering), the confidence-interval proxy, and the new ADC strategy. The regularization term substantially aids optimization stability and prevents density overfitting in occluded regions.

Limitations and Future Directions

Although MB3DGS substantially improves volumetric consistency and order-independence over previous splatting approaches, its reconstruction quality is sensitive to the chosen densification hyperparameters and to the accuracy of camera calibration. In challenging scenes with fine, highly parallaxed features, residual blur and under-reconstruction remain, primarily due to conservative thresholds in adaptive density control. The moment-based continuous transmittance can also over- or underestimate attenuation in regions with highly variable density that a finite set of moments cannot fully capture.

Future developments could focus on improved, data-driven or hierarchical densification strategies, further robustness to camera miscalibration, and hybridization with stochastic sampling or sparse ray tracing for highly heterogeneous, non-Lambertian media.

Theoretical and Practical Implications

The main theoretical impact is the bridging of physically-motivated volumetric rendering with the efficiency and parallelism of GPU-accelerated rasterization for explicit point cloud representations. MB3DGS demonstrates that order-independent, physically accurate occlusion and light transport can be achieved without resorting to sample sorting or expensive ray marching, opening avenues for high-fidelity real-time graphics and view synthesis.

Practically, this method enables scalable scene rendering with fine-grained control over efficiency and quality trade-offs, and its differentiable formulation is compatible with end-to-end optimization for neural scene representations, robotics, and real-world AR/VR applications. The reliance on GPU-friendly architectures makes it immediately suitable for deployment in production pipelines demanding both fidelity and performance.

Conclusion

MB3DGS provides a formally grounded, computationally efficient framework for volumetric rendering of point-based scene representations, overcoming fundamental limitations of alpha-blending and order-dependence in 3DGS. By modeling light attenuation via moment-based order-independent transmittance, the method delivers physically accurate appearance of semi-transparent and overlapping geometry. The approach is especially effective for complex visual effects and can serve as a foundation for future advances in physically motivated, real-time novel view synthesis and scene reconstruction (2512.11800).
