- The paper introduces a framework analyzing 3D Gaussian Splatting (3DGS) approximations, finding opacity-based splatting superior to extinction methods with many primitives.
- Experiments show that 3DGS rendering approximations, including simplified sorting and self-attenuation, have negligible visual impact with a high number of Gaussians.
- The findings imply that 3DGS approximations work well because a high number of primitives offers sufficient expressiveness, reducing the need for strictly accurate volumetric rendering.
The paper "Does 3D Gaussian Splatting Need Accurate Volumetric Rendering?" analyzes the approximations made by 3D Gaussian Splatting (3DGS) for real-time novel view synthesis, contrasting them with the principled volumetric rendering of Neural Radiance Fields (NeRFs). It introduces a mathematical framework to clarify the differences between 3DGS and accurate volumetric rendering, focusing on opacity versus extinction-based rendering. The paper presents extinction-based splatting and ray-marching algorithms for Gaussian primitives and evaluates the impact of 3DGS approximations on visual quality and performance.
The authors clarify the distinction between the learned opacity value in 3DGS and the extinction function used in volumetric rendering, where extinction is referred to as "density" in NeRF literature. To facilitate analysis, an extinction-based splatting solution is introduced. Experiments indicate that the extinction-based solution performs better with a small number of primitives, but this reverses as the number of primitives increases, with opacity splatting performing best. This suggests that as the number of Gaussians increases, rendering them with 3DGS becomes as expressive as volumetric rendering.
The paper notes that 3DGS resolves visibility through a single global sorting step based on Gaussian centers, an approximation that causes popping artifacts. Spatial overlap of Gaussians is ignored, which deviates from the volumetric rendering integral. A ray-marching algorithm on 3D Gaussians is implemented to study the impact of this approximation, revealing that these approximations have a negligible impact on still images, especially with a large number of Gaussians.
Other approximations made by 3DGS, such as incorrect treatment of self-attenuation and approximate screen-space shape projection, are also shown to have little impact on the effectiveness of 3DGS. The key contributions of the paper include:
- A mathematical framework clarifying the differences between 3DGS and accurate volumetric rendering.
- Introducing extinction-based splatting and ray-marching algorithms for Gaussian primitives, along with a closed-form solution for splatting self-attenuated Gaussians.
- Demonstrating that opacity-based splatting results in lower error compared to extinction-based methods when using a sufficiently high number of primitives.
- Showing that for a low number of Gaussians, correct overlap resolution and extinction-based rendering improve image quality, while correct sorting does not significantly affect results.
Mathematical Framework
The paper revisits the volumetric rendering integral:
$$I(p) = \int_0^{\infty} c(r, t)\, f(r(t))\, e^{-\int_0^{t} f(r(\tau))\, d\tau}\, dt$$
Where:
- I(p) is the image function, parameterized by pixel p.
- c(r,t) is the radiance at r(t) in the direction of ray r.
- f(r(t)) is the extinction coefficient at r(t).
- r is the viewing ray, parameterized by distance t.
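As a concrete illustration of this integral, the following is a minimal numerical quadrature sketch along a single ray (not the paper's code); the sample count and the toy extinction and radiance functions are illustrative assumptions.

```python
import numpy as np

def render_ray(f, c, t_max=10.0, n_samples=1024):
    """Approximate I(p) = ∫ c(t) f(t) exp(-∫_0^t f(τ) dτ) dt by quadrature."""
    t = np.linspace(0.0, t_max, n_samples)
    dt = t[1] - t[0]
    extinction = f(t)                       # f(r(t)) sampled along the ray
    radiance = c(t)                         # c(r, t) sampled along the ray
    # Cumulative optical depth ∫_0^t f(τ) dτ (left Riemann sum).
    optical_depth = np.concatenate([[0.0], np.cumsum(extinction[:-1]) * dt])
    transmittance = np.exp(-optical_depth)
    return np.sum(radiance * extinction * transmittance * dt)

# Toy example: a Gaussian bump of extinction with constant white radiance.
I = render_ray(f=lambda t: np.exp(-0.5 * (t - 4.0) ** 2),
               c=lambda t: np.ones_like(t))
```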
The integral above models direct volume rendering with attenuation and source terms. The paper then specializes it to a Gaussian representation of the extinction function, using both normalized Gaussian functions:
$$\mathcal{G}^n_D(x, w, \mu, \Sigma) = w\, \mathcal{N}_D(x;\, \mu, \Sigma)$$
Where:
- $\mathcal{G}^n_D$ is the D-dimensional normalized Gaussian function.
- $x$ is a point in $\mathbb{R}^D$.
- $w$ is a weight parameter.
- $\mu$ is the D-dimensional position (mean).
- $\Sigma$ is the shape (covariance matrix).
- $\mathcal{N}_D$ is the PDF of the D-dimensional normal distribution.
And unnormalized Gaussian functions:
$$\mathcal{G}^u_D(x, a, \mu, \Sigma) = a\, I_D(\Sigma)\, \mathcal{N}_D(x;\, \mu, \Sigma)$$
Where:
- $\mathcal{G}^u_D$ is the D-dimensional unnormalized Gaussian function.
- $a$ is the amplitude.
- $I_D(\Sigma)$ is the normalization factor for the exponential part of a D-dimensional normalized Gaussian function.
The extinction function is modeled by a mixture of Gaussians:
$$f(x) = \sum_{i=0}^{N} \mathcal{G}^n_3(x, w_i, \mu_i, \Sigma_i)$$
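To make the two parameterizations concrete, here is a small sketch of the normalized and unnormalized Gaussian forms and the mixture extinction function; the function names are ours, not the paper's.

```python
import numpy as np

def I_D(Sigma):
    """Normalization factor of the exponential part: sqrt((2π)^D det Σ)."""
    D = Sigma.shape[0]
    return np.sqrt((2.0 * np.pi) ** D * np.linalg.det(Sigma))

def gaussian_exp(x, mu, Sigma):
    """Unnormalized exponential part: exp(-0.5 (x-μ)^T Σ^{-1} (x-μ))."""
    d = x - mu
    return np.exp(-0.5 * d @ np.linalg.solve(Sigma, d))

def G_n(x, w, mu, Sigma):
    """Normalized Gaussian: w * N_D(x; μ, Σ)."""
    return w * gaussian_exp(x, mu, Sigma) / I_D(Sigma)

def G_u(x, a, mu, Sigma):
    """Unnormalized Gaussian: a * I_D(Σ) * N_D(x; μ, Σ) = a * exp-part."""
    return a * gaussian_exp(x, mu, Sigma)

def extinction(x, weights, means, covs):
    """Mixture extinction f(x) = Σ_i G^n_3(x, w_i, μ_i, Σ_i)."""
    return sum(G_n(x, w, mu, S) for w, mu, S in zip(weights, means, covs))
```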
EWA and 3D Gaussian Splatting
The paper details that to avoid the high cost of volume integration, both Elliptical Weighted Average (EWA) and 3DGS simplify the rendering of 3D Gaussians by reducing them to 2D Gaussians that can be easily "splatted."
EWA exploits simplifications to derive the 2D extinction contribution function $f_i$ of Gaussian $i$ from its 3D definition:
$$f_i(p) = \mathcal{G}^n_2(p, w_i, \mu'_i, \Sigma'_i) = \int_{-\infty}^{\infty} \mathcal{G}^n_3(r(t), w_i, \mu_i, \Sigma_i)\, dt$$
Where:
- $\mu'_i$ and $\Sigma'_i$ are the projected 2D mean and covariance matrix.
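This line integral has a closed form because integrating a normalized Gaussian along a line yields another normalized Gaussian. A minimal sketch, assuming a camera-aligned (orthographic, z-axis) view so the marginal is just the top-left 2×2 block of $\Sigma$; the perspective case additionally needs the Jacobian introduced below.

```python
import numpy as np

def ewa_project_orthographic(w, mu, Sigma):
    """Marginalize a camera-space 3D Gaussian along the z (depth) axis.
    Returns the weight, 2D mean, and 2D covariance of f_i(p)."""
    mu_2d = mu[:2]
    Sigma_2d = Sigma[:2, :2]     # marginal covariance of (x, y)
    return w, mu_2d, Sigma_2d    # the total weight w is preserved by integration

def f_i(p, w, mu_2d, Sigma_2d):
    """Evaluate the 2D extinction footprint G^n_2(p, w, μ', Σ')."""
    d = p - mu_2d
    norm = 2.0 * np.pi * np.sqrt(np.linalg.det(Sigma_2d))
    return w * np.exp(-0.5 * d @ np.linalg.solve(Sigma_2d, d)) / norm
```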
In contrast, 3DGS uses unnormalized Gaussians and preserves the 2D amplitude $a'$ across all projections:
$$o_i(p) = \mathcal{G}^u_2(p, a'_i, \mu'_i, \Sigma'_i)$$
The computation of $\Sigma'$ involves transforming the Gaussian from world-space coordinates to screen space, a projection approximated by its locally affine counterpart:
$$\Sigma' = J W \Sigma W^T J^T$$
Where:
- J is the Jacobian matrix.
- W is the transformation to camera space.
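A sketch of this projection, following the local affine approximation used by EWA/3DGS-style implementations; the focal-length parameters and the exact Jacobian layout are standard conventions stated here as assumptions rather than taken from the paper.

```python
import numpy as np

def screen_space_covariance(Sigma_world, W, mu_world, fx, fy):
    """Project a world-space covariance to an approximate 2D screen-space covariance."""
    t = W[:3, :3] @ mu_world + W[:3, 3]          # Gaussian center in camera space
    x, y, z = t
    # Jacobian of the perspective projection, linearized at the Gaussian center.
    J = np.array([[fx / z, 0.0,    -fx * x / z**2],
                  [0.0,    fy / z, -fy * y / z**2]])
    R = W[:3, :3]                                 # world-to-camera rotation
    cov_cam = R @ Sigma_world @ R.T
    return J @ cov_cam @ J.T                      # 2x2 screen-space covariance Σ'
```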
The attenuation term is approximated by the first-order Taylor expansion of $e^x$, resulting in the image function:
$$I(p) = \sum_{i=0}^{N} c_i(r)\, g_i(p) \prod_{j=0}^{i-1} \bigl(1 - g_j(p)\bigr) + c_b \prod_{i=0}^{N} \bigl(1 - g_i(p)\bigr)$$
Where:
- $c_i$ is an evaluation of the spherical harmonics in the viewing direction.
- $c_b$ is the background color.
- $g_i$ is the $i$-th Gaussian's partial contribution, either extinction or opacity.
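In code, this discrete image function is the familiar front-to-back alpha-compositing loop. A minimal per-pixel sketch over contributions $g_i$ (opacity or splatted extinction), assuming the Gaussians have already been sorted by depth:

```python
import numpy as np

def composite_pixel(colors, g, background):
    """I(p) = Σ_i c_i g_i Π_{j<i} (1 - g_j) + c_b Π_i (1 - g_i)."""
    pixel = np.zeros(3)
    transmittance = 1.0
    for c_i, g_i in zip(colors, g):          # front-to-back order
        pixel += transmittance * g_i * c_i
        transmittance *= (1.0 - g_i)
    return pixel + transmittance * np.asarray(background)
```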
Analysis of 3DGS Representation and Approximations
The paper analyzes the key difference between EWA and 3DGS, i.e., the use of 2D opacity instead of extinction-based values. It introduces a unified framework for computing Gaussian-based extinction functions across EWA and 3DGS, using an abstract data term θ to derive the appearance of each Gaussian.
For EWA splatting, the stored per-Gaussian data term $\theta$ corresponds to $w$, the total integral of each normalized Gaussian function. The unnormalized Gaussian amplitudes are:
$$a = \frac{\theta}{I_3(\Sigma)}$$
$$a' = \frac{\theta}{I_2(\Sigma')}$$
3D Gaussian Splatting stores an "opacity" term on the 3D primitives and uses it directly as a constant, view-independent amplitude $a'$ for the projected 2D Gaussians:
$$a' = \theta$$
The view-dependent solution for $a$ in 3D can be recovered from $\theta$:
$$w = I_2(\Sigma')\, \theta$$
$$a = \frac{w}{I_3(\Sigma)} = \frac{I_2(\Sigma')\, \theta}{I_3(\Sigma)}$$
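The two conventions differ only in how the stored data term $\theta$ maps to amplitudes. A small sketch, with $I_2$ and $I_3$ the normalization factors defined earlier:

```python
import numpy as np

def I_2(Sigma2): return 2.0 * np.pi * np.sqrt(np.linalg.det(Sigma2))
def I_3(Sigma3): return (2.0 * np.pi) ** 1.5 * np.sqrt(np.linalg.det(Sigma3))

def ewa_amplitudes(theta, Sigma3, Sigma2_proj):
    # θ is the total integral w of the normalized Gaussian.
    a = theta / I_3(Sigma3)                  # 3D amplitude
    a_prime = theta / I_2(Sigma2_proj)       # 2D amplitude (view-dependent)
    return a, a_prime

def gs3d_amplitudes(theta, Sigma3, Sigma2_proj):
    # θ is used directly as the projected 2D amplitude ("opacity").
    a_prime = theta
    w = I_2(Sigma2_proj) * theta             # implied total integral (view-dependent)
    a = w / I_3(Sigma3)                      # implied 3D amplitude
    return a, a_prime
```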
Optimization with EWA-based Extinction
The paper aims to adapt EWA-based splatting for gradient-descent-based optimization. To ensure robustness under optimization and the ability to model thin, solid objects, it arrives at a scheme called opacity-thin-side (OTS), which dynamically scales the learned weight $\theta$ such that $a' = \theta$ when the Gaussian is viewed facing its thinnest side:
$$a = \theta\, \frac{I_2^*(\Sigma)}{I_3(\Sigma)}$$
$$a' = \theta\, \frac{I_2^*(\Sigma)}{I_2(\Sigma')}$$
Where:
- $I_2^*(\Sigma)$ is the largest possible value of $I_2(\Sigma')$.
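A sketch of the OTS scaling. Since the projected footprint is largest when a Gaussian is viewed along its smallest principal axis, $I_2^*$ is computed here from the two largest eigenvalues of $\Sigma$; this concrete construction is our assumption about one reasonable implementation, not code from the paper.

```python
import numpy as np

def I_2_star(Sigma3):
    """Largest possible I_2(Σ'): footprint spanned by the two largest eigenvalues of Σ."""
    eig = np.sort(np.linalg.eigvalsh(Sigma3))    # ascending: [λ_min, λ_mid, λ_max]
    return 2.0 * np.pi * np.sqrt(eig[1] * eig[2])

def ots_amplitudes(theta, Sigma3, Sigma2_proj):
    I3 = (2.0 * np.pi) ** 1.5 * np.sqrt(np.linalg.det(Sigma3))
    I2 = 2.0 * np.pi * np.sqrt(np.linalg.det(Sigma2_proj))
    a = theta * I_2_star(Sigma3) / I3        # 3D amplitude
    a_prime = theta * I_2_star(Sigma3) / I2  # 2D amplitude; equals θ at the thinnest side
    return a, a_prime
```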
Attenuation and Self-Attenuation
Both EWA splatting and 3DGS ignore how a Gaussian's extinction affects its own appearance, referred to as self-attenuation. To address attenuation in a principled manner, the paper revisits the volumetric integration equation for a Gaussian mixture with just one Gaussian:
$$I(p) = c_0(r) \int_{-\infty}^{\infty} \mathcal{G}^n_3\bigl(r(t), w_0, \mu_0, \Sigma_0\bigr)\, e^{-\int_{-\infty}^{t} \mathcal{G}^n_3\bigl(r(\tau), w_0, \mu_0, \Sigma_0\bigr)\, d\tau}\, dt$$
The closed-form solution is:
$$I(p) = c_0(r)\, \bigl(1 - e^{-f_0(p)}\bigr)$$
This solution extends to multiple Gaussians:
$$I(p) = \sum_{i=0}^{N} c_i(r)\, \bigl(1 - e^{-f_i(p)}\bigr) \prod_{j=0}^{i-1} e^{-f_j(p)} + c_b \prod_{i=0}^{N} e^{-f_i(p)}$$
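The difference from the usual compositing is only in the per-Gaussian term: $1 - e^{-f_i(p)}$ instead of the linearized $f_i(p)$ (or the learned opacity). A minimal per-pixel sketch, given splatted extinction values $f_i(p)$:

```python
import numpy as np

def composite_self_attenuated(colors, f, background):
    """I(p) = Σ_i c_i (1 - e^{-f_i}) Π_{j<i} e^{-f_j} + c_b Π_i e^{-f_i}."""
    pixel = np.zeros(3)
    transmittance = 1.0
    for c_i, f_i in zip(colors, f):              # front-to-back order
        alpha = 1.0 - np.exp(-f_i)               # self-attenuated contribution
        pixel += transmittance * alpha * c_i
        transmittance *= np.exp(-f_i)
    return pixel + transmittance * np.asarray(background)

# 3DGS instead uses alpha ≈ f_i (first-order Taylor expansion), which
# overestimates the contribution for large extinction but matches it as f_i → 0.
```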
Visibility
The paper addresses the issues of Gaussian overlap and depth sorting. To assess the importance of exact visibility for scene reconstruction and novel-view synthesis, a principled ray marching-based renderer for 3D Gaussians is designed.
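The contrast with splatting is that such a renderer resolves visibility per sample rather than per primitive: at every step the full mixture is evaluated, so overlap and ordering are handled exactly. A minimal, unoptimized sketch (uniform sampling, constant per-Gaussian radiance), not the paper's renderer:

```python
import numpy as np

def ray_march(origin, direction, weights, means, covs, colors,
              t_near=0.0, t_far=10.0, n_samples=512):
    """Exact-visibility rendering of a Gaussian-mixture extinction along one ray."""
    colors = np.asarray(colors)                    # shape (N, 3)
    ts = np.linspace(t_near, t_far, n_samples)
    dt = ts[1] - ts[0]
    pixel = np.zeros(3)
    transmittance = 1.0
    for t in ts:
        x = origin + t * direction
        # Evaluate every Gaussian of the mixture at this sample point.
        f_k = np.array([w * np.exp(-0.5 * (x - mu) @ np.linalg.solve(S, x - mu))
                        / np.sqrt((2 * np.pi) ** 3 * np.linalg.det(S))
                        for w, mu, S in zip(weights, means, covs)])
        sigma = f_k.sum()                          # total extinction at x
        if sigma > 0:
            color = (f_k[:, None] * colors).sum(0) / sigma   # extinction-weighted radiance
            alpha = 1.0 - np.exp(-sigma * dt)
            pixel += transmittance * alpha * color
            transmittance *= 1.0 - alpha
    return pixel, transmittance
```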
Evaluation
Numerical experiments are conducted to determine the impact of approximations on reconstruction quality, using the NeRF synthetic dataset and additional volumetric datasets. The evaluation uses image quality metrics such as SSIM, PSNR, and LPIPS.
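For reference, PSNR is the simplest of these metrics and can be computed directly, while SSIM and LPIPS are typically taken from libraries such as scikit-image and the `lpips` package. A minimal PSNR sketch, assuming images with values in [0, 1]:

```python
import numpy as np

def psnr(rendered, reference, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((rendered - reference) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```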