2DGS-R: Revisiting the Normal Consistency Regularization in 2D Gaussian Splatting

Published 19 Oct 2025 in cs.CV | (2510.16837v1)

Abstract: Recent advancements in 3D Gaussian Splatting (3DGS) have greatly influenced neural fields, as it enables high-fidelity rendering with impressive visual quality. However, 3DGS has difficulty accurately representing surfaces. In contrast, 2DGS transforms the 3D volume into a collection of 2D planar Gaussian disks. Despite advancements in geometric fidelity, rendering quality remains compromised, highlighting the challenge of achieving both high-quality rendering and precise geometric structures. This indicates that optimizing both geometric and rendering quality in a single training stage is currently unfeasible. To overcome this limitation, we present 2DGS-R, a new method that uses a hierarchical training approach to improve rendering quality while maintaining geometric accuracy. 2DGS-R first trains the original 2D Gaussians with the normal consistency regularization. Then 2DGS-R selects the 2D Gaussians with inadequate rendering quality and applies a novel in-place cloning operation to enhance the 2D Gaussians. Finally, we fine-tune the 2DGS-R model with opacity frozen. Experimental results show that compared to the original 2DGS, our method requires only 1% more storage and minimal additional training time. Despite this negligible overhead, it achieves high-quality rendering results while preserving fine geometric structures. These findings indicate that our approach effectively balances efficiency with performance, leading to improvements in both visual fidelity and geometric reconstruction accuracy.

Summary

  • The paper presents a novel hierarchical training strategy that decouples appearance and geometry optimization using normal consistency regularization.
  • It demonstrates that targeted attribute refinement and in-place clone densification yield a 46% F-score improvement with minimal computational overhead.
  • The approach effectively balances high-quality rendering with precise surface reconstruction across multiple datasets using competitive metrics.

Revisiting Normal Consistency Regularization in 2D Gaussian Splatting: The 2DGS-R Framework

Introduction

The paper "2DGS-R: Revisiting the Normal Consistency Regularization in 2D Gaussian Splatting" addresses the persistent challenge in neural field-based scene representations: achieving high-fidelity rendering while maintaining precise geometric reconstruction. While 3D Gaussian Splatting (3DGS) has demonstrated real-time rendering capabilities, its surface definition is ambiguous, complicating mesh extraction. 2D Gaussian Splatting (2DGS) improves geometric fidelity by projecting volumetric data onto 2D planar Gaussian disks, but this comes at the cost of reduced rendering quality. The core contribution of 2DGS-R is a hierarchical training strategy that leverages normal consistency regularization (NC) and targeted attribute refinement to balance rendering and geometric accuracy with minimal computational overhead. Figure 1

Figure 1: Motivation for 2DGS-R—simultaneous high-quality rendering and precise surface reconstruction via normal consistency regularization.

Methodology

2DGS Preliminaries and Loss Formulation

2DGS parameterizes each Gaussian by its central position $\mu$, scaling vector $\boldsymbol{S}$, and rotation matrix $\boldsymbol{R}$, enabling a transformation from UV space to world space. The rendering pipeline involves projecting rays from the camera through pixels and computing intersections in the Gaussian's local coordinates. The loss function integrates photometric loss $\mathcal{L}_c$, depth distortion $\mathcal{L}_d$, and normal consistency $\mathcal{L}_n$:
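
For reference, the disk-to-world mapping that this parameterization induces can be written out as follows. This is a reconstruction based on the original 2DGS formulation rather than on equations reproduced in this summary, with $\boldsymbol{t}_u, \boldsymbol{t}_v$ denoting the two tangential directions encoded by $\boldsymbol{R}$ and $s_u, s_v$ the entries of $\boldsymbol{S}$:

$$
P(u, v) = \mu + s_u \boldsymbol{t}_u\, u + s_v \boldsymbol{t}_v\, v, \qquad
\mathcal{G}(u, v) = \exp\!\left(-\tfrac{u^2 + v^2}{2}\right), \qquad
\boldsymbol{n} = \boldsymbol{t}_u \times \boldsymbol{t}_v
$$

A ray-splat intersection is thus evaluated directly in the disk's $(u, v)$ coordinates, and $\boldsymbol{n}$ is the splat normal consumed by the normal consistency term below.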

$$\mathcal{L} = \mathcal{L}_c + \alpha\,\mathcal{L}_d + \beta\,\mathcal{L}_n$$

Normal consistency regularization aligns the normals of splats with those estimated from the depth map, enforcing geometric coherence but often degrading rendering quality.
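
To make the loss composition concrete, below is a minimal PyTorch-style sketch of how such a combined objective could be assembled. It assumes a rasterizer that returns per-pixel color, depth, splat normals, and a distortion map; the function and key names are illustrative rather than the authors' implementation, the photometric term is plain L1 for brevity, and the default weights are placeholders for $\alpha$ and $\beta$.

```python
import torch
import torch.nn.functional as F

def normals_from_depth(depth, fx, fy):
    """Estimate per-pixel normals from a depth map (H, W) via finite differences:
    back-project neighboring pixels and take the cross product of the tangents."""
    H, W = depth.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    # Back-project to camera space (principal point assumed at the image center).
    X = (xs - W / 2) * depth / fx
    Y = (ys - H / 2) * depth / fy
    pts = torch.stack([X, Y, depth], dim=-1)              # (H, W, 3)
    dx = pts[:, 1:, :] - pts[:, :-1, :]                   # horizontal tangent
    dy = pts[1:, :, :] - pts[:-1, :, :]                   # vertical tangent
    n = torch.cross(dx[:-1], dy[:, :-1], dim=-1)          # (H-1, W-1, 3)
    return F.normalize(n, dim=-1)

def total_loss(render, gt_image, alpha=1000.0, beta=0.05, fx=500.0, fy=500.0):
    """L = L_c + alpha * L_d + beta * L_n; `render` is assumed to expose
    'color', 'depth', 'normal', and 'distortion'. Weights are illustrative."""
    l_c = (render["color"] - gt_image).abs().mean()       # photometric (L1 here)
    l_d = render["distortion"].mean()                     # depth-distortion map from the rasterizer
    n_depth = normals_from_depth(render["depth"], fx, fy)
    n_splat = F.normalize(render["normal"][:-1, :-1], dim=-1)
    l_n = (1.0 - (n_splat * n_depth).sum(dim=-1)).mean()  # normal consistency: 1 - cos
    return l_c + alpha * l_d + beta * l_n
```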

Analysis of Normal Consistency Impact

Empirical analysis reveals that increasing the weight of $\mathcal{L}_n$ improves geometric reconstruction (F-score up by 46%) but reduces rendering quality (PSNR down by 0.8 dB). The introduction of NC increases both the spatial coverage $K_a$ and opacity $\alpha$ of Gaussians, which benefits geometry but harms appearance.

Figure 2: NC increases $K_a$ and opacity, improving reconstruction but degrading rendering on the Tanks and Temples dataset.

Hierarchical Training Strategy

2DGS-R introduces a three-stage pipeline:

  1. Stage 1: Train 2DGS with NC to obtain well-distributed Gaussians in space.
  2. Stage 2: Identify high-error Gaussians (HEGs) via a per-Gaussian color error metric $E_i$ (see Eq. 10). Freeze all attributes of low-error Gaussians (LEGs); for HEGs, only SH coefficients are trainable. Fine-tune appearance without NC.
  3. Stage 3: In-place clone densification: clone HEGs to increase modeling capacity in regions with high color variation. For cloned Gaussians, fine-tune SHs and $\Sigma$ while freezing opacity ($\alpha$) to decouple appearance adaptation from geometry refinement (a minimal sketch of stages 2 and 3 appears after this subsection).

Figure 3: Geometric properties ($\mu$, $\Sigma$) and appearance attributes ($\alpha$, SHs) in 2DGS.

Figure 4: Computation of per-Gaussian error $E_i$ for targeted refinement.

Figure 5: Rendering process of a single Gaussian and the effect of NC on color and normal.

This staged approach mitigates the trade-off between rendering and geometry, with only a 1% increase in storage and negligible additional training time.
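
The selection-and-clone procedure of stages 2 and 3 can be sketched as follows. This is a minimal PyTorch-style sketch under assumptions: the attribute names (`mu`, `scaling`, `rotation`, `opacity`, `sh`), the function names, and the top-1% threshold are illustrative; the exact per-Gaussian error $E_i$ (Eq. 10) and the optimizer bookkeeping of the released implementation are not reproduced here.

```python
import torch

def select_hegs(per_gaussian_error, top_fraction=0.01):
    """Indices of high-error Gaussians (HEGs): the top `top_fraction` of all
    Gaussians ranked by their accumulated color error E_i."""
    k = max(1, int(top_fraction * per_gaussian_error.numel()))
    return torch.topk(per_gaussian_error, k).indices

def in_place_clone(params, heg_idx):
    """Duplicate the selected HEGs at their current positions (in-place clone).
    Returns fresh leaf tensors so trainability can then be set per attribute."""
    return {
        name: torch.cat([tensor.detach(), tensor.detach()[heg_idx]], dim=0)
        for name, tensor in params.items()
    }

def set_stage3_trainability(params, trainable=("sh", "scaling", "rotation")):
    """Stage 3: SHs and the covariance factors remain trainable while opacity
    is frozen; any attribute not listed stays frozen in this sketch."""
    for name, tensor in params.items():
        tensor.requires_grad_(name in trainable)
    return params

# Hypothetical usage with illustrative attribute names:
# params = {"mu": mu, "scaling": s, "rotation": r, "opacity": o, "sh": sh}
# heg_idx = select_hegs(E)   # E: accumulated per-Gaussian color error
# params = set_stage3_trainability(in_place_clone(params, heg_idx))
```

Because only the top roughly 1% of Gaussians are cloned and only their appearance-related attributes are re-optimized, the extra storage and training cost stay small, which is consistent with the overhead reported by the paper.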

Experimental Results

Quantitative and Qualitative Evaluation

2DGS-R is evaluated on NeRF-Synthetic, Tanks and Temples (TnT), and DTU datasets. Metrics include PSNR, SSIM, LPIPS for rendering, and F-score or Chamfer Distance for geometry. Compared to baseline 2DGS and other state-of-the-art methods, 2DGS-R achieves:

  • Rendering: PSNR improvement (e.g., 24.73 vs. 24.30 on TnT), SSIM and LPIPS gains.
  • Geometry: F-score and Chamfer Distance on par with or better than 2DGS with NC, while avoiding the rendering degradation typical of strong geometric regularization.

Figure 6: Mesh normal comparison on NeRF-Synthetic. 2DGS-R reconstructs more complete and detailed scenes.

Figure 7: Novel view synthesis on TnT. 2DGS-R recovers fine details and reduces artifacts in low-texture regions.

Figure 8: Geometric reconstruction on DTU. 2DGS-R yields more accurate meshes, especially in challenging regions.

Figure 9: Geometric reconstruction on TnT. 2DGS-R produces smoother results than direct NC application.

Ablation Studies

  • Clone Densification: In-place cloning of HEGs is more robust than gradient-based densification (e.g., AbsGS), which is sensitive to hyperparameters and can destabilize geometry.
  • Freeze Opacity: Freezing $\alpha$ during fine-tuning yields the best rendering-geometry trade-off.
  • K Parameter: Increasing the number of cloned Gaussians ($K$) beyond 1% does not yield significant improvements, indicating the efficiency of the targeted approach.

Implications and Future Directions

2DGS-R demonstrates that decoupling appearance and geometry optimization in neural field representations is feasible and effective. The hierarchical strategy enables high-fidelity rendering and accurate surface reconstruction with minimal overhead. This approach is particularly relevant for applications requiring both photorealistic visualization and precise 3D modeling, such as AR/VR, robotics, and digital heritage.

Theoretically, the work suggests that the conflict between rendering and geometry in neural fields can be mitigated by attribute-specific regularization and targeted model capacity enhancement. Practically, the pipeline is compatible with existing 2DGS frameworks and can be extended to other Gaussian-based representations.

Future research may explore adaptive strategies for attribute selection, integration with dynamic scene modeling, and further reduction of computational costs. The methodology could also be generalized to other neural field architectures where appearance-geometry trade-offs are present.

Conclusion

2DGS-R provides a principled solution to the rendering-geometry trade-off in 2D Gaussian Splatting by introducing a multi-stage training pipeline that leverages normal consistency regularization, targeted attribute refinement, and in-place clone densification. The method achieves competitive results in both rendering and geometric reconstruction with minimal resource overhead, offering a practical and theoretically sound framework for high-quality neural scene representations.
