RBF Modeling in Dynamic Scene Representations
- RBF Modeling is a technique that uses localized basis functions, typically Gaussians, to efficiently represent dynamic scene parameters.
- It underpins Hybrid Gaussian Splatting by selectively applying temporal parameterization, capturing both abrupt and smooth changes.
- Through static-dynamic decomposition, the method reduces model size by up to 98% relative to fully time-parameterized models while maintaining or improving rendering fidelity.
Radial Basis Function (RBF) Modeling
Radial Basis Function (RBF) modeling is a technique for representing time-dependent or spatially varying parameters using a set of localized basis functions, each typically defined as a Gaussian or similar kernel. Within the context of modern explicit scene representations and, specifically, Hybrid Gaussian Splatting (HGS) for dynamic view synthesis, RBF modeling serves as the foundational approach for compact and differentiable temporal parameterization of dynamic scene components (Zhang et al., 16 Dec 2025). This paradigm addresses the need for efficient, high-fidelity dynamic scene modeling under real-time constraints, avoiding the parameter bloat and inefficiency observed in previous per-frame explicit models.
1. Mathematical Fundamentals of RBF Modeling
In RBF modeling, a function $f(t)$, for instance a motion trajectory or a scalar attribute with temporal variation, is expressed as a weighted sum of basis functions:

$$f(t) = \sum_{i=1}^{N} w_i\,\varphi_i(t),$$

where the basis functions $\varphi_i$ are localized in the domain (commonly $\varphi_i(t) = \exp\!\big(-\tfrac{(t-\mu_i)^2}{2\sigma_i^2}\big)$ for Gaussians centered at $\mu_i$ with width $\sigma_i$), and $w_i$ are learnable weights. In explicit dynamic scene representations, each primitive (e.g., a 3D Gaussian) maintains a compact set of RBF coefficients to control its time-varying attributes such as centroid $\boldsymbol{\mu}$, rotation $\mathbf{q}$, or opacity $o$.
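As a concrete illustration, the following minimal NumPy sketch evaluates such an expansion with Gaussian bases; the function and variable names (`rbf_eval`, `centers`, `widths`, `weights`) are illustrative, not from the paper:

```python
import numpy as np

def rbf_eval(t, centers, widths, weights):
    """Evaluate f(t) = sum_i w_i * exp(-(t - mu_i)^2 / (2 * sigma_i^2))."""
    t = np.atleast_1d(t)[:, None]                            # query times, shape (T, 1)
    phi = np.exp(-((t - centers) ** 2) / (2 * widths ** 2))  # basis matrix, shape (T, N)
    return phi @ weights                                     # weighted sum, shape (T,)

# Three Gaussian bases approximating a 1D time-varying attribute on t in [0, 1]
centers = np.array([0.2, 0.5, 0.8])     # centers mu_i
widths  = np.array([0.15, 0.15, 0.15])  # widths sigma_i
weights = np.array([1.0, -0.5, 2.0])    # learnable weights w_i
print(rbf_eval(np.linspace(0, 1, 5), centers, widths, weights))
```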
2. Application in Hybrid Gaussian Splatting (HGS)
The principal challenge for dynamic scene models is to capture both abrupt and smooth temporal variations while avoiding redundant parameterization in static regions. In HGS, RBF modeling is applied exclusively to dynamic primitives, encoding their temporal evolution as follows (Zhang et al., 16 Dec 2025):
- Center trajectory: $\boldsymbol{\mu}_i(t) = \sum_{k=0}^{K} \mathbf{b}_{i,k}\,(t - \tau_i)^k$,
- Rotation: $\mathbf{q}_i(t) = \sum_{k=0}^{K_q} \mathbf{c}_{i,k}\,(t - \tau_i)^k$,
- Opacity: $o_i(t) = o_i \exp\!\big(-s_i\,(t - \tau_i)^2\big)$,
where $\mathbf{b}_{i,k}$ and $\mathbf{c}_{i,k}$ are polynomial coefficients (interpreted as RBF-weighted control points in the general case), $\tau_i$ is the temporal anchor for primitive $i$, and $s_i$ governs temporal scale. This parameterization is functionally equivalent to an RBF expansion with Gaussian windows localized in time, facilitating both smooth interpolation and representation of abrupt changes through proper placement and width of the basis functions.
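As a concrete (hypothetical) evaluation of these formulas for a single primitive, the NumPy sketch below assumes a quadratic center polynomial and a linear rotation polynomial; the function and argument names are illustrative, not the paper's API:

```python
import numpy as np

def eval_dynamic_gaussian(t, tau, b, c, o_base, s):
    """Evaluate time-varying attributes of one dynamic primitive at time t.

    tau: temporal anchor tau_i; s: temporal scale s_i (assumed names).
    b: (K+1, 3) center-trajectory coefficients b_{i,k}.
    c: (K_q+1, 4) rotation coefficients c_{i,k}.
    """
    dt = t - tau
    p_b = dt ** np.arange(b.shape[0])   # [1, dt, dt^2, ...] for the center
    p_c = dt ** np.arange(c.shape[0])   # powers for the rotation polynomial
    mu = p_b @ b                        # center mu_i(t), shape (3,)
    q = p_c @ c                         # rotation q_i(t), shape (4,)
    q = q / np.linalg.norm(q)           # re-normalize to a unit quaternion
    o = o_base * np.exp(-s * dt ** 2)   # Gaussian-windowed opacity o_i(t)
    return mu, q, o

# Example: quadratic center motion, linear rotation drift, anchored at tau = 0.5
mu, q, o = eval_dynamic_gaussian(
    t=0.7, tau=0.5,
    b=np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.5, 0.0]]),
    c=np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 0.1, 0.0, 0.0]]),
    o_base=0.9, s=4.0,
)
```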
By restricting the use of RBFs to truly dynamic elements, HGS achieves dramatic reductions in model size—up to 98% compared to models that assign temporal parameters to all primitives—and completely eliminates redundancy in static regions, which revert to time-invariant parameter storage.
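To see where savings of this kind come from, let $p_s$ denote the parameters a primitive stores regardless of motion, $p_t$ the additional temporal (RBF) parameters, and $\rho$ the fraction of primitives classified as dynamic (symbols introduced here for illustration; they do not appear in the paper). The relative size of the selectively parameterized model is then

$$\frac{\mathrm{size}_{\mathrm{HGS}}}{\mathrm{size}_{\text{all-dynamic}}} = \frac{p_s + \rho\, p_t}{p_s + p_t},$$

which approaches $p_s/(p_s + p_t)$ as $\rho \to 0$; when temporal parameters dominate per-primitive storage and most of the scene is static, reductions of the reported magnitude become plausible.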
3. Static-Dynamic Decomposition and Redundancy Elimination
Crucial to the HGS RBF modeling strategy is the Static-Dynamic Decomposition (SDD), which dichotomizes scene primitives into static (no temporal variation) and dynamic (RBF-controlled temporal evolution) subsets. For static Gaussians, all temporal coefficients are either zeroed or shared globally:
- $\mathbf{b}_{i,k} = \mathbf{0}$ for $k \geq 1$ (the center reduces to a constant $\boldsymbol{\mu}_i$),
- $\mathbf{c}_{i,k} = \mathbf{0}$ for $k \geq 1$ (the rotation reduces to a constant $\mathbf{q}_i$),
- $s_i = 0$, so $o_i(t) = o_i$ is time-invariant,
- Only the base attributes $(\boldsymbol{\mu}_i, \mathbf{q}_i, o_i)$ are retained per static primitive,
- One (shared) zeroed set of RBF parameters is serialized for all static primitives.
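A minimal sketch of how this sharing could be realized at serialization time; the class and field names below are hypothetical, not the paper's API:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TemporalParams:
    """Temporal (RBF) parameters of one primitive; field names are illustrative."""
    b: np.ndarray     # center-trajectory coefficients b_{i,k}, k >= 1
    c: np.ndarray     # rotation coefficients c_{i,k}, k >= 1
    tau: float = 0.0  # temporal anchor tau_i
    s: float = 0.0    # temporal scale s_i (0 makes opacity time-invariant)

# A single zeroed record, serialized once and shared by ALL static primitives:
SHARED_STATIC = TemporalParams(b=np.zeros((3, 3)), c=np.zeros((2, 4)))

def temporal_params(is_dynamic: bool, own: TemporalParams | None) -> TemporalParams:
    # Dynamic primitives carry their own RBF parameters; static ones
    # alias the shared zeroed set, so no per-primitive temporal storage.
    return own if is_dynamic and own is not None else SHARED_STATIC
```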
The outcome is maximum storage efficiency in static regions and minimal parameters needed for high temporal complexity in dynamic regions. This stands in marked contrast to prior 4DGS/STGS frameworks, which assign time-varying parameters indiscriminately, leading to temporal blur and model bloat (Zhang et al., 16 Dec 2025).
4. Training Strategies and Boundary Coherence
HGS introduces a two-stage training regimen to handle static-dynamic boundaries, where temporal coherence and suppression of flicker are critical:
- Static Optimization Stage: Dynamic parameters frozen; static parameters updated via gradients of photometric and structural losses.
- Dynamic Optimization Stage: Static parameters frozen; dynamic (RBF) parameters updated analogously.
This alternation ensures consistent optimization at static-dynamic interfaces, mitigates artifacts, and maintains high-frequency detail in both temporal and spatial domains. RBF modeling is thus tightly integrated into the objective function, allowing explicit and differentiable manipulation of both static and dynamic elements during learning (Zhang et al., 16 Dec 2025).
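The alternation can be realized with standard parameter freezing; the following PyTorch sketch assumes hypothetical names (`static_params`, `dynamic_params`, `render_loss`) and a simplified one-pass schedule:

```python
import torch

def train_two_stage(static_params, dynamic_params, render_loss, steps=1000, lr=1e-3):
    """Alternate a static stage and a dynamic (RBF) stage; schedule is illustrative."""
    stages = [
        (static_params, dynamic_params),  # Stage 1: optimize static, freeze dynamic
        (dynamic_params, static_params),  # Stage 2: optimize dynamic, freeze static
    ]
    for active, frozen in stages:
        for p in frozen:
            p.requires_grad_(False)       # freeze the complementary subset
        for p in active:
            p.requires_grad_(True)
        opt = torch.optim.Adam(active, lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = render_loss()          # photometric + structural losses
            loss.backward()
            opt.step()
```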
5. Quantitative Performance and Model Efficiency
Empirical results indicate that RBF modeling of dynamic parameters within HGS achieves a strong trade-off among speed, storage, and fidelity:
| Dataset | PSNR (dB) | SSIM | Model Size | Rendering FPS (4K, RTX 3090) | Relative Size Reduction |
|---|---|---|---|---|---|
| Neural 3D Video Dataset | 32.36 | 0.952 | 6.87 MB | 125 | up to 98% |
| Google Immersive | 29.60 | 0.925 | 12.7 MB | >300 | up to 98% |
Compared to NeRF-based or all-explicit dynamic Gaussian methods, HGS with RBF modeling provides equivalent or higher temporal fidelity and visible improvements in reproducing fast-changing details and boundaries (Zhang et al., 16 Dec 2025).
6. Relation to Other RBF and Dynamic Parameterization Approaches
RBF modeling in dynamic geometry has precedent in the broader literature, but HGS exemplifies a practical and principled application for explicit, temporally-conditioned splatting. Other works (e.g., STGS) use polynomial or RBF interpolation for all primitives, but the absence of SDD gives rise to superfluous parameters and temporal blurring of static content (Zhang et al., 16 Dec 2025). The selective assignment in HGS yields a superior balance between flexibility and efficiency.
7. Impact and Broader Significance
RBF modeling for time-conditioned parameterization in HGS has significant implications for the field of dynamic view synthesis and real-time rendering. By enabling abrupt temporal change modeling with minimal parameters and by sharply delineating static from dynamic content, RBF modeling allows explicit Gaussians to serve as a viable alternative to implicit neural fields for dynamic scene synthesis—crucial for VR, AR, and resource-constrained real-time systems (Zhang et al., 16 Dec 2025).
RBFs, due to their smoothness, compact localization, and differentiable structure, are likely to remain central in future work on temporally adaptive, efficient dynamic scene representations. Possible future research includes adaptive selection of RBF centers/widths, hierarchical time-frequency RBFs, or coupling with low-rank temporal models for further compression or expressiveness.