FastGS: Accelerated 3D Gaussian Splatting
- FastGS is an acceleration framework for 3D Gaussian Splatting that adaptively adjusts Gaussian counts based on multi-view consistency to significantly reduce training time.
- It integrates multi-view consistent densification and pruning modules, optimizing Gaussian placements during training to maintain or improve PSNR and SSIM.
- The framework achieves 2–15× speedup across static, dynamic, and SLAM applications, demonstrating broad applicability and high-quality rendering.
FastGS is an acceleration framework for 3D Gaussian Splatting (3DGS) that adaptively regulates the number and placement of Gaussians during training based on multi-view consistency metrics, thereby achieving a substantial reduction in training time without compromising final rendering quality. Unlike prior approaches that rely on fixed budgeting or simplistic heuristics for densification and pruning, FastGS introduces a dynamic method that utilizes multi-view photometric error for both adding and removing Gaussian primitives. This framework achieves up to 15.45× acceleration over baseline 3DGS, delivers equivalent or improved PSNR and SSIM scores, and is broadly applicable across static reconstruction, dynamic scenes, surface modeling, large-scale geometry, and SLAM.
1. Methodological Foundations
FastGS operates within the standard 3DGS workflow but introduces two principal modules: multi-view consistent densification (VCD) and multi-view consistent pruning (VCP). The process can be summarized as follows:
- Initialization: Multi-view images and an SfM point cloud (e.g., from COLMAP) are used to instantiate an initial set of anisotropic 3D Gaussians. Each primitive $\mathcal{G}_i$ is parameterized by a mean $\mu_i$, a rotation quaternion $q_i$, per-axis scales $s_i$ (which together define the covariance), an opacity $\alpha_i$, and SH color coefficients $c_i$ (see the sketch after this list).
- Iterative Training: For $t = 1, \dots, T$ steps:
  - The 3DGS rasterizer produces rendered images $\hat{I}_j$, which are compared to the ground-truth images $I_j$ to compute the photometric loss $\mathcal{L}$.
  - Densification (VCD) is executed every $D_+$ steps (until iteration 15,000), while pruning (VCP) is performed every $D_-$ steps on a complementary or continuous schedule.
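As a concrete reference for the parameterization and initialization step, the following minimal sketch instantiates one Gaussian per SfM point. The container and field names, initial scale, and initial opacity are illustrative assumptions, not values from the paper.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class GaussianCloud:
    """Illustrative container for the per-primitive 3DGS parameters."""
    means: np.ndarray      # (N, 3)    centers mu_i
    quats: np.ndarray      # (N, 4)    rotation quaternions q_i
    scales: np.ndarray     # (N, 3)    per-axis scales s_i (covariance = R diag(s)^2 R^T)
    opacities: np.ndarray  # (N,)      opacities alpha_i
    sh_coeffs: np.ndarray  # (N, B, 3) SH color coefficients c_i (B basis functions)

def init_from_sfm(points: np.ndarray, colors: np.ndarray, sh_bands: int = 3) -> GaussianCloud:
    """Instantiate one Gaussian per SfM point (e.g., from a COLMAP reconstruction)."""
    n = points.shape[0]
    num_basis = (sh_bands + 1) ** 2
    sh = np.zeros((n, num_basis, 3), dtype=np.float32)
    sh[:, 0, :] = colors                                   # DC term seeded from the point color
    return GaussianCloud(
        means=points.astype(np.float32),
        quats=np.tile(np.array([1.0, 0.0, 0.0, 0.0], dtype=np.float32), (n, 1)),
        scales=np.full((n, 3), 0.01, dtype=np.float32),    # placeholder initial scale
        opacities=np.full((n,), 0.1, dtype=np.float32),    # placeholder initial opacity
        sh_coeffs=sh,
    )
```

Reference 3DGS implementations typically derive the initial scale from nearest-neighbor point distances and optimize all fields jointly with Adam.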
Multi-View Consistency Densification (VCD)
VCD identifies 3D Gaussians that persistently correspond to high-error regions across multiple training views:
- For view $j$ at a pixel $p$, the per-pixel error is $e_j(p) = \frac{1}{C}\sum_{c=1}^{C}\bigl|\hat{I}_j^{\,c}(p) - I_j^{\,c}(p)\bigr|$, where $C$ is the color channel count.
- Errors are normalized to $[0, 1]$, and a binary mask $M_j$ is computed by thresholding the normalized error $\tilde{e}_j(p)$.
- Each Gaussian $\mathcal{G}_i$ is projected into all sampled training views $\mathcal{V}$; its densification score $S_i^{+}$ is the mean count of mask-activated pixels within its projected footprint $\mathcal{F}_{ij}$: $S_i^{+} = \frac{1}{|\mathcal{V}|}\sum_{j \in \mathcal{V}} \bigl|\{\, p \in \mathcal{F}_{ij} : M_j(p) = 1 \,\}\bigr|$.
- Gaussians with $S_i^{+} > \tau_{+}$ are split, using the local principal axes for geometry-aware cloning (a code sketch of the scoring follows this list).
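A minimal NumPy sketch of the VCD scoring described above, under the assumption that per-view error maps and per-Gaussian footprint masks are already available from the rasterizer; function names, data layout, and the mask-threshold parameter are illustrative.

```python
import numpy as np

def normalized_error_map(rendered: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Per-pixel error averaged over the C color channels, min-max normalized to [0, 1]."""
    err = np.abs(rendered - target).mean(axis=-1)            # (H, W)
    lo, hi = err.min(), err.max()
    return (err - lo) / (hi - lo + 1e-8)

def vcd_scores(error_maps: dict, footprints: dict, mask_thresh: float) -> dict:
    """Densification score per Gaussian: mean count of mask-activated pixels
    inside its projected footprint, averaged over the sampled training views.

    error_maps: view_id -> (H, W) normalized error map
    footprints: gaussian_id -> {view_id -> boolean (H, W) footprint mask}
    """
    scores = {}
    for gid, views in footprints.items():
        counts = [
            int(np.count_nonzero((error_maps[vid] > mask_thresh) & fp))
            for vid, fp in views.items()
        ]
        scores[gid] = float(np.mean(counts)) if counts else 0.0
    return scores
```

Gaussians whose score exceeds $\tau_{+}$ would then be split along their principal axes, as described above.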
Multi-View Consistency Pruning (VCP)
VCP removes Gaussians that contribute only marginally or redundantly to the rendered views:
- For each view $j$, the global photometric loss $\mathcal{L}_j$ between the rendering $\hat{I}_j$ and the ground truth $I_j$ is computed.
- The pruning score $S_i^{-}$ of Gaussian $\mathcal{G}_i$ is aggregated from these per-view losses, where $\widetilde{\mathcal{L}}_j$ denotes a min-max normalization over all $\{\mathcal{L}_j\}$.
- When $S_i^{-} > \tau_{-}$, the corresponding primitive is pruned (see the sketch after this list).
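A corresponding sketch for VCP. The min-max normalization and thresholding follow the description above; the per-Gaussian aggregation over views (here, an average of normalized losses weighted by a contribution indicator) is an assumed interface for illustration, not the paper's exact formula.

```python
import numpy as np

def vcp_prune_mask(view_losses: np.ndarray, contributions: np.ndarray, tau_minus: float) -> np.ndarray:
    """Return a boolean mask of primitives to prune.

    view_losses:   (V,)   global photometric loss L_j per training view
    contributions: (N, V) indicator (binary or soft) of whether Gaussian i contributes to view j
                   (an assumed interface standing in for the paper's per-Gaussian aggregation)
    """
    lo, hi = view_losses.min(), view_losses.max()
    norm_losses = (view_losses - lo) / (hi - lo + 1e-8)           # min-max over all views
    # Assumed aggregation: average normalized loss over the views each Gaussian touches.
    weights = contributions / np.maximum(contributions.sum(axis=1, keepdims=True), 1e-8)
    scores = weights @ norm_losses                                 # (N,) pruning scores
    return scores > tau_minus                                      # True -> prune this primitive
```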
High-level Pseudocode
```
Input: G (Gaussians), {I_j} (training images), {v_j} (cameras),
       K, τ₊, τ₋ (thresholds), D₊, D₋ (intervals), T (iterations)

for t in 1..T:
    sample a training view v_j
    render G from v_j → R_j
    loss = compute_photometric_loss(R_j, I_j)
    optimize G wrt loss
    if t ≤ 15000 and t % D₊ == 0:        # VCD: multi-view consistent densification
        for each Gaussian 𝒢ᵢ in G:
            if densify_score(𝒢ᵢ) > τ₊:
                split(𝒢ᵢ)
    if t % D₋ == 0:                      # VCP: multi-view consistent pruning
        for each Gaussian 𝒢ᵢ in G:
            if prune_score(𝒢ᵢ) > τ₋:
                prune(𝒢ᵢ)
```
2. Theoretical Properties
The principal theoretical claim is that training time scales proportionally to the number of active Gaussians $N_t$, since both the forward and backward 3DGS passes depend on $N_t$. By enforcing tight multi-view consistency in densification, FastGS keeps $N_t$ well below the counts produced by standard densification throughout training; consequently, the total training cost scales roughly as $\sum_{t=1}^{T} N_t$, and a smaller Gaussian budget translates directly into wall-clock savings.
This empirical scaling is supported by the observed 2–15× training-time reductions. The pruning regime is conservative: primitives are removed only on the basis of demonstrable multi-view photometric evidence, preserving rendering quality to within 0.2 dB PSNR of the baseline.
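To make the scaling argument concrete, assume (per the claim above) that the per-iteration cost is roughly linear in the active Gaussian count; taking the final counts from the Mip-NeRF 360 rows of the table in Section 3 as a rough proxy for the average count gives

$$T_{\text{train}} \;\approx\; \sum_{t=1}^{T} c\,N_t \;\propto\; \bar{N}, \qquad \frac{\bar{N}_{\text{3DGS-accel}}}{\bar{N}_{\text{FastGS}}} \;\approx\; \frac{2.63\ \text{M}}{0.38\ \text{M}} \;\approx\; 6.9,$$

which is of the same order as the measured 5.7× wall-clock speedup; the gap plausibly reflects per-iteration overheads that do not scale with $N_t$.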
No closed-form convergence proof is provided; the schedule is justified empirically by ablation studies on diverse datasets.
3. Performance Evaluation and Empirical Results
FastGS is validated on Mip-NeRF 360, Tanks & Temples, Deep Blending, and additional dynamic and SLAM tasks. Metrics include PSNR, SSIM, LPIPS for visual quality, along with total training time, final count of Gaussians, and FPS for inference throughput.
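For reference, PSNR is the standard peak signal-to-noise ratio over the mean squared image error,

$$\mathrm{PSNR} = 10\,\log_{10}\!\frac{\mathrm{MAX}_I^{2}}{\mathrm{MSE}}, \qquad \mathrm{MSE} = \frac{1}{HWC}\sum_{p,\,c}\bigl(\hat{I}^{\,c}(p) - I^{\,c}(p)\bigr)^{2},$$

with $\mathrm{MAX}_I = 1$ for images normalized to $[0, 1]$; SSIM and LPIPS are the usual structural and learned perceptual similarity metrics.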
Summary of Quantitative Results
| Dataset | Method | Training Time (min) | PSNR (dB) | # Gaussians (M) | Speedup |
|---|---|---|---|---|---|
| Mip-NeRF 360 | 3DGS-accel | 10.94 | 27.52 | 2.63 | – |
| Mip-NeRF 360 | DashGaussian | 6.38 | 27.73 | 2.40 | 1.8× |
| Mip-NeRF 360 | FastGS | 1.92 | 27.54 | 0.38 | 5.7× |
| Deep Blending | 3DGS-accel | 8.87 | 29.74 | – | – |
| Deep Blending | 3DGS | 19.77 | – | – | – |
| Deep Blending | FastGS | 1.28 | 30.03 | – | 15.45× |
| Tanks & Temples | 3DGS-accel | 6.96 | 23.85 | – | – |
| Tanks & Temples | FastGS | 1.32 | 24.15 | – | 5.3× |
Ablation studies further show that VCD alone yields a 3.0× reduction in training time, VCP alone a 1.95× reduction, and the combined method fully realizes the observed speedups with minimal effect on visual metrics.
4. Generality and Applicability
FastGS functions as a plug-in algorithm, compatible with any 3DGS-based pipeline (a hypothetical integration sketch follows the list below). The observed accelerations are not limited to static reconstruction but extend to:
- Static pipelines: 3DGS-accel, Mip-Splatting, Scaffold-GS.
- Dynamic scene modeling: Deformable-3DGS on NeRF-DS, Neu3D.
- Surface mesh reconstruction: Plugged into PGSR on Tanks & Temples and Mip-NeRF 360, preserving F1 scores at 2–6× reduced training time.
- Sparse-view and large-scale scenes: DropGaussian (sparse-view), Octree-GS (urban/large indoor), with 3–4× acceleration.
- Simultaneous localization and mapping: Photo-SLAM, running 5× faster at the same localization accuracy.
These results indicate broad transferability of the core criteria based on view-consistent error occupancy.
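To illustrate the plug-in usage claimed above, the following sketch swaps the multi-view consistent criteria into the adaptive density-control step of a generic 3DGS-style training loop. The hook and configuration names (`adaptive_density_control`, `cfg.densify_until`, and so on) are assumptions for illustration and build on the `vcd_scores` / `vcp_prune_mask` sketches from Section 1; they are not the reference FastGS API.

```python
def adaptive_density_control(gaussians, error_maps, view_losses, cfg, step):
    """Hypothetical drop-in replacement for a 3DGS pipeline's densify/prune step,
    driven by the multi-view consistent criteria (VCD/VCP) instead of
    gradient-magnitude heuristics. All names are illustrative."""
    if step <= cfg.densify_until and step % cfg.densify_interval == 0:
        scores = vcd_scores(error_maps, gaussians.footprints, cfg.mask_thresh)
        gaussians.split([gid for gid, s in scores.items() if s > cfg.tau_plus])
    if step % cfg.prune_interval == 0:
        mask = vcp_prune_mask(view_losses, gaussians.contributions, cfg.tau_minus)
        gaussians.remove(mask)
    return gaussians
```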
5. Implementation Insights
FastGS maintains compatibility with standard 3DGS rasterizers (tile-based splatting, $\alpha$-blending), leveraging the photometric loss ($\mathcal{L}_1$, SSIM), SH-based color, and Adam optimization throughout. The only modifications are the VCD/VCP routines, which operate outside the per-ray rendering kernel.
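The photometric objective referenced here is the standard 3DGS combination of an $\mathcal{L}_1$ term and a D-SSIM term with a blending weight $\lambda$ (0.2 in the reference 3DGS implementation):

$$\mathcal{L} = (1 - \lambda)\,\mathcal{L}_1\!\bigl(\hat{I}, I\bigr) + \lambda\,\mathcal{L}_{\text{D-SSIM}}\!\bigl(\hat{I}, I\bigr).$$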
Densification and pruning are scheduled at configurable iteration intervals. The thresholds $\tau_{+}$, $\tau_{-}$ and intervals $D_{+}$, $D_{-}$, tuned on held-out data, are largely task-agnostic and require minimal adjustment across application domains.
Final rendering proceeds as in standard 3DGS, with the learned Gaussian cloud frozen. Memory and compute usage are directly proportional to the intermediate Gaussian count $N_t$, so the reduced count lowers both hardware and run-time demands.
6. Comparative Perspective
Unlike prior methods that employ fixed-budgeting, arbitrary splitting heuristics, or manual Gaussian culling, FastGS links all densification and pruning to explicit, aggregated photometric error maps across training views. This effectively eliminates both unnecessary densification (over-segmentation) and suboptimal pruning (removal of beneficial primitives). The method dispenses with auxiliary scheduling mechanisms, relying solely on multi-view error statistics. Ablation reveals that the combined multi-view strategy dominates other optimization and culling heuristics in efficiency.
7. Limitations and Extensions
FastGS does not provide a theoretical convergence guarantee. The VCD and VCP thresholds are tuned empirically and may need adjustment for niche modalities (e.g., non-Lambertian surfaces or highly sparse camera trajectories). The reliance on photometric loss maps presumes reasonable multi-view calibration and geometric alignment.
Potential future directions, suggested by the architecture, include coupling view-consistent error-based scheduling with adaptive learning rates, further generalization to other non-splat primitive sets, and possible extension to in-situ (online) scene refinement for robotics and mapping applications.
In summary, FastGS demonstrates that tightly integrating multi-view photometric error analysis with adaptive primitive scheduling delivers order-of-magnitude gains in 3DGS training speed, substantial memory and compute savings, and cross-domain generality, all while maintaining or exceeding established standards of rendering fidelity (Ren et al., 6 Nov 2025).