Universal Beta Splatting (UBS)
- Universal Beta Splatting is a rendering paradigm that replaces fixed Gaussian splats with learnable N-dimensional Beta kernels to adaptively model spatial, angular, and temporal variations.
- Its Beta-modulated conditional slicing technique enables dynamic scene adjustments without auxiliary networks, ensuring efficient view-dependent and time-varying appearance.
- UBS maintains backward compatibility with legacy methods while significantly reducing training time and enhancing rendering quality, as evidenced by improved PSNR benchmarks.
Universal Beta Splatting (UBS) is a unified explicit radiance field rendering paradigm that generalizes conventional 3D Gaussian Splatting (3DGS) by introducing N-dimensional anisotropic Beta kernels. These kernels offer adaptive dependency modeling across spatial, angular, and temporal dimensions, facilitating superior handling of complex light transport, view-dependent appearance, and dynamic scene content. UBS eliminates the need for auxiliary networks or specialized color encodings, achieves real-time rendering performance via CUDA acceleration, and maintains backward compatibility with Gaussian-based splatting methods, establishing Beta kernels as a scalable universal primitive for radiance field rendering (Liu et al., 30 Sep 2025).
1. N-Dimensional Beta Kernel Framework
UBS replaces the fixed, symmetric Gaussian primitive in 3DGS with an N-dimensional, learnable Beta kernel. Each kernel is parameterized by a per-dimension shape parameter $\beta_i$ controlling the Beta exponent $b_i$. The full kernel over the $N$ dimensions is

$$B(\mathbf{x}) = \prod_{i=1}^{N} \bigl(1 - d_i(\mathbf{x})\bigr)^{b_i},$$
where $d_i(\mathbf{x}) \in [0, 1]$ is a bounded distance measure along dimension $i$ (a clamped, covariance-whitened squared offset). The N-dimensional mean $\boldsymbol{\mu}$ and covariance $\boldsymbol{\Sigma}$ are partitioned into spatial ($s$) and query ($q$; e.g., view, time) subspaces:

$$\boldsymbol{\mu} = \begin{bmatrix} \boldsymbol{\mu}_s \\ \boldsymbol{\mu}_q \end{bmatrix}, \qquad \boldsymbol{\Sigma} = \begin{bmatrix} \boldsymbol{\Sigma}_{ss} & \boldsymbol{\Sigma}_{sq} \\ \boldsymbol{\Sigma}_{qs} & \boldsymbol{\Sigma}_{qq} \end{bmatrix}.$$
This parameterization enables per-dimension, interpretable control of kernel sharpness and support, allowing kernels to simultaneously model extended surfaces, fine texture, highly localized specular highlights, or dynamic appearance.
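As a concrete illustration, the N-dimensional Beta kernel described above can be sketched in NumPy. This is a minimal sketch, not the released CUDA implementation: the exact whitening, clamping, and exponent parameterization are assumptions.

```python
import numpy as np

def beta_kernel(x, mu, Sigma, b):
    """Evaluate an anisotropic N-D Beta kernel at points x.

    x: (M, N) points; mu: (N,) mean; Sigma: (N, N) covariance;
    b: (N,) per-dimension Beta exponents (positive, e.g. b = exp(beta)).
    Normalization and clamping choices here are assumptions.
    """
    L = np.linalg.cholesky(Sigma)                 # whiten by the covariance
    z = np.linalg.solve(L, (x - mu).T).T          # (M, N) whitened offsets
    d = np.clip(z ** 2, 0.0, 1.0)                 # bounded distance in [0, 1]
    return np.prod((1.0 - d) ** b, axis=1)        # separable Beta response

# A sharp kernel (large b) decays faster off-center than a flat one (small b),
# which is the per-dimension frequency control the text describes.
mu, Sigma = np.zeros(3), np.eye(3)
x = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
flat = beta_kernel(x, mu, Sigma, np.full(3, 0.5))
sharp = beta_kernel(x, mu, Sigma, np.full(3, 8.0))
```

Both kernels peak at the mean; the bounded support means the response reaches exactly zero at the whitened cutoff, unlike a Gaussian's infinite tails.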
2. Beta-Modulated Conditional Slicing
To produce a spatial kernel suitable for rendering, UBS uses Beta-modulated conditional slicing, conditioning the N-dimensional kernel on non-spatial query variables (e.g., view direction, time). For a query $\mathbf{q}$, the sliced spatial mean and covariance take the standard conditional form

$$\boldsymbol{\mu}_{s|q} = \boldsymbol{\mu}_s + \boldsymbol{\Sigma}_{sq}\boldsymbol{\Sigma}_{qq}^{-1}(\mathbf{q} - \boldsymbol{\mu}_q), \qquad \boldsymbol{\Sigma}_{s|q} = \boldsymbol{\Sigma}_{ss} - \boldsymbol{\Sigma}_{sq}\boldsymbol{\Sigma}_{qq}^{-1}\boldsymbol{\Sigma}_{qs}.$$
This conditioning allows spatial appearance to adapt in response to view or temporal changes, subsuming explicit spherical-harmonic color encoding and dynamic-appearance networks. A product-form opacity gate further allows explicit control of density in the rendered kernel:

$$\alpha(\mathbf{q}) = \alpha \prod_{j \in q} \bigl(1 - d_j(\mathbf{q})\bigr)^{b_j},$$

where the product runs over the query dimensions, smoothly attenuating kernels whose angular or temporal support does not cover $\mathbf{q}$.
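The slicing step can be sketched with standard multivariate conditioning on a partitioned covariance. This is a hedged sketch, not the reference CUDA kernels: the helper name `slice_kernel`, the diagonal normalization of the gate distance, and the `spatial_dims` convention are assumptions.

```python
import numpy as np

def slice_kernel(mu, Sigma, q, b_q, spatial_dims=3):
    """Condition an N-D kernel on a query vector q (view/time dims).

    Uses the standard conditional mean/covariance of a partitioned
    quadratic form, plus a product-form Beta opacity gate over the
    query subspace. A sketch under assumed conventions.
    """
    s = spatial_dims
    mu_s, mu_q = mu[:s], mu[s:]
    S_ss, S_sq = Sigma[:s, :s], Sigma[:s, s:]
    S_qs, S_qq = Sigma[s:, :s], Sigma[s:, s:]
    gain = S_sq @ np.linalg.inv(S_qq)
    mu_cond = mu_s + gain @ (q - mu_q)            # query-shifted spatial mean
    Sigma_cond = S_ss - gain @ S_qs               # Schur complement
    # Opacity gate: attenuate kernels whose angular/temporal support does
    # not cover q (diagonal normalization here is an assumption).
    d_q = np.clip((q - mu_q) ** 2 / np.diag(S_qq), 0.0, 1.0)
    gate = float(np.prod((1.0 - d_q) ** b_q))
    return mu_cond, Sigma_cond, gate
```

When the spatial-query cross-covariance is zero the slice degenerates to the static case: the spatial mean and covariance are unchanged and only the gate responds to the query.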
3. Backward Compatibility with Gaussian Splatting
UBS is designed so that setting all shape parameters to the Gaussian-matching default (all $\beta_i = 0$ in the authors' parameterization) recovers the Gaussian kernel profile, i.e., the Beta falloff $(1 - d/(2b))^{b}$ closely approximates $\exp(-d/2)$. This ensures that UBS can be deployed as a direct drop-in replacement for legacy 3DGS (and extensions such as 6DGS, 7DGS), inheriting their quality as a lower bound while offering substantial adaptive improvements when the Beta parameters are learned (Liu et al., 30 Sep 2025).
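The Gaussian-recovery property can be checked numerically. Under the scaling assumed here, $(1 - d/(2b))_+^{b} \to \exp(-d/2)$ as $b \to \infty$ (the classical $(1 + x/n)^n \to e^x$ limit), so even moderate exponents track the Gaussian falloff closely:

```python
import numpy as np

# Compare Beta profiles against the Gaussian falloff exp(-d/2) over a range
# of squared distances d; the (1 - d/(2b))^b scaling is an assumed normalization.
d = np.linspace(0.0, 4.0, 401)
gauss = np.exp(-0.5 * d)
errs = {}
for b in (4.0, 16.0, 64.0):
    beta = np.clip(1.0 - d / (2.0 * b), 0.0, None) ** b
    errs[b] = float(np.max(np.abs(beta - gauss)))
# The sup-norm gap shrinks monotonically as b increases, so a finite-exponent
# Beta kernel can stand in for a Gaussian primitive to good accuracy.
```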
4. Expressiveness, Decomposition, and Interpretability
The independent shape control over spatial, angular, and temporal dimensions allows Beta kernels to decompose scene properties without explicit supervision:
- Spatial shape parameters separate coarse geometry (surfaces, with flat kernel response) from fine texture (sharp, peaked response).
- Angular parameters distinguish diffuse (broad) from specular (localized) appearance.
- Temporal parameters enable discrimination of static content (broad, unvarying support) from dynamic elements (localized, time-dependent support).
This decomposition facilitates post-hoc editing (relighting, motion adjustment) and supports interpretability in downstream analysis, such as semantic segmentation or object tracking.
5. Performance Characteristics and Implementation
The UBS framework includes a fully CUDA-accelerated pipeline for evaluation and optimization of Beta kernels, including fused kernels for conditional slicing and spatial-orthogonal Cholesky operations. The parameter efficiency is enhanced relative to separate Gaussian+SH representations, as both geometry and appearance are handled within the unified Beta kernel:
- Static scene training: UBS-6D achieved up to a 69% reduction in training time on challenging datasets compared to Gaussian baselines.
- Dynamic scenes: UBS-7D realized a 48.7% reduction in training time versus 7DGS, with real-time rendering performance and up to +8.27 dB PSNR improvement in select benchmarks.
- Interactive applications: Rendering framerates were improved by ~26% in some settings due to aggressive kernel fusion and parameter compression.
6. Relation to Deformable Beta Splatting and Kernel-Agnostic Optimization
UBS builds directly on the principles introduced in Deformable Beta Splatting (DBS) (Liu et al., 27 Jan 2025), inheriting the benefits of compact, bounded-support kernels and adaptive frequency control. The kernel-agnostic MCMC optimization strategy, previously demonstrated on DBS, is applicable to the UBS formulation: as opacity is regularized and sufficiently small, distributional preservation under densification is guaranteed, independent of the specific kernel form. This result underpins the universal applicability of Beta-based splatting, including for compression, densification, and integration of confidence scores via learnable Beta distributions (Razlighi et al., 28 Jun 2025).
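The distribution-preserving densification property can be illustrated with a small sketch of the opacity-splitting identity used by MCMC-style relocation (the full relocation schedule is in the cited works; the exact convention here is an assumption):

```python
def split_opacity(alpha: float, k: int = 2) -> float:
    """Per-copy opacity when one primitive is densified into k copies.

    Solves 1 - (1 - alpha_new)^k = alpha, so the combined coverage of
    the k copies equals that of the original primitive. For small alpha
    this reduces to alpha / k, which is the regime where the text's
    kernel-agnostic preservation guarantee applies (the blended result
    is then independent of Beta vs. Gaussian kernel shape to first order).
    """
    return 1.0 - (1.0 - alpha) ** (1.0 / k)
```

Because the rule is stated purely in terms of opacity, it carries over unchanged when Gaussian primitives are swapped for Beta kernels.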
7. Applications and Future Directions
UBS has demonstrated state-of-the-art performance in static, view-dependent, and dynamic scene rendering on standardized benchmarks (NeRF Synthetic, Mip-NeRF360, 6DGS-PBR). Its extensible N-dimensional formulation suggests future research avenues:
- Integrating additional modalities (lighting, material properties) through further query dimensions.
- Leveraging interpretability for graphics editing (relighting, material adjustment, motion manipulation).
- Enhancing hardware acceleration with bespoke kernel fusion strategies.
- Using learned Beta kernel parameters for downstream vision tasks.
A plausible implication is that UBS may serve as a universal explicit primitive for high-fidelity, efficient radiance field modeling, replacing legacy Gaussian methods across diverse rendering contexts and enabling rich scene decomposition via interpretable kernel parameterization (Liu et al., 30 Sep 2025).