ZeroGS: Training 3D Gaussian Splatting from Unposed Images (2411.15779v1)

Published 24 Nov 2024 in cs.CV

Abstract: Neural radiance fields (NeRF) and 3D Gaussian Splatting (3DGS) are popular techniques to reconstruct and render photo-realistic images. However, the pre-requisite of running Structure-from-Motion (SfM) to get camera poses limits their completeness. While previous methods can reconstruct from a few unposed images, they are not applicable when images are unordered or densely captured. In this work, we propose ZeroGS to train 3DGS from hundreds of unposed and unordered images. Our method leverages a pretrained foundation model as the neural scene representation. Since the accuracy of the predicted pointmaps does not suffice for accurate image registration and high-fidelity image rendering, we propose to mitigate the issue by initializing and finetuning the pretrained model from a seed image. Images are then progressively registered and added to the training buffer, which is further used to train the model. We also propose to refine the camera poses and pointmaps by minimizing a point-to-camera ray consistency loss across multiple views. Experiments on the LLFF dataset, the MipNeRF360 dataset, and the Tanks-and-Temples dataset show that our method recovers more accurate camera poses than state-of-the-art pose-free NeRF/3DGS methods, and even renders higher quality images than 3DGS with COLMAP poses. Our project page is available at https://aibluefisher.github.io/ZeroGS.

Summary

  • The paper introduces ZeroGS, a novel method that trains 3D Gaussian splatting models from unposed images using a pretrained foundation model.
  • It employs an incremental pipeline with seed initialization, image registration using RANSAC and PnP, and a point-to-camera ray consistency loss for pose and scene refinement.
  • Evaluations on datasets like LLFF, MipNeRF360, and Tanks-and-Temples demonstrate its superior reconstruction quality compared to traditional pose-based and pose-free approaches.

Insightful Overview of "ZeroGS: Training 3D Gaussian Splatting from Unposed Images"

The paper "ZeroGS: Training 3D Gaussian Splatting from Unposed Images" introduces a novel methodology for reconstructing neural scenes from unposed and unordered images, addressing a significant limitation in 3D reconstruction technologies like Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS). This paper leverages a foundation model paradigm to overcome the dependency on Structure-from-Motion (SfM) tools for camera pose estimation, a prominent hurdle in traditional methods that typically rely on tools such as COLMAP for initial pose estimation.

Core Contributions and Methodology

The primary contribution of this work is the ZeroGS method, built on a pretrained 3D-vision foundation model, specifically an architecture based on Spann3R, which itself derives from DUSt3R. ZeroGS employs this model to predict dense 3D pointmaps and 3D Gaussian primitives directly from image pairs, bypassing the need for initial camera pose estimates. This decouples camera poses from the initial scene reconstruction, making the technique robust to unordered image captures.
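To make the decoupling concrete, here is a minimal, hypothetical sketch of a pairwise-pointmap interface; the function and model names are placeholders, not the actual Spann3R/DUSt3R API. The key property is that the network maps two RGB images directly to per-pixel 3D points, with no camera poses as input.

```python
# Hypothetical sketch of a pairwise pointmap predictor (placeholder API;
# the real Spann3R/DUSt3R interfaces differ). No camera poses are given:
# the network itself regresses a 3D point for every pixel, expressed in
# the first image's camera frame, along with per-point confidences.
import torch

def predict_pointmaps(model: torch.nn.Module,
                      img_a: torch.Tensor,    # (3, H, W), RGB in [0, 1]
                      img_b: torch.Tensor):   # (3, H, W), RGB in [0, 1]
    with torch.no_grad():
        # Assumed output convention: pointmaps of shape (1, H, W, 3) and
        # confidence maps of shape (1, H, W) for each image.
        pts_a, pts_b, conf_a, conf_b = model(img_a[None], img_b[None])
    return pts_a[0], pts_b[0], conf_a[0], conf_b[0]
```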

The authors propose a sophisticated incremental training pipeline:

  1. Seed Initialization: The process starts from a single seed image rather than the image pair used to bootstrap traditional incremental SfM. The seed is chosen as the image most strongly connected to the rest of the collection in terms of visual similarity, which facilitates broad scene coverage.
  2. Image Registration: New images are registered with RANSAC and a PnP solver, which recover coarse camera poses from direct 2D-3D correspondences derived from the predicted pointmaps (see the registration sketch after this list). This coarse alignment is then refined using a point-to-camera ray consistency loss as batches of newly registered images are progressively integrated.
  3. Refinement and Optimization: The method iteratively refines both camera poses and the scene representation, training the network with a rendering loss and applying a two-stage strategy to finalize camera poses and sharpen the reconstructed scene (a sketch of the ray consistency term also follows this list).
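
As a concrete illustration of step 2, the sketch below registers a new image from its predicted pointmap using OpenCV's RANSAC-PnP solver. The confidence threshold and solver settings here are illustrative assumptions, not values taken from the paper, and known intrinsics K are assumed.

```python
# Coarse registration from a predicted pointmap: pair each confident pixel
# with its predicted 3D point to form 2D-3D correspondences, then solve
# for the camera pose with RANSAC + PnP.
import cv2
import numpy as np

def register_image(pointmap: np.ndarray,   # (H, W, 3) predicted 3D points
                   conf: np.ndarray,       # (H, W) per-point confidence
                   K: np.ndarray,          # (3, 3) camera intrinsics
                   conf_thresh: float = 1.5):  # illustrative threshold
    H, W = conf.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    mask = conf > conf_thresh                     # keep confident points only
    pts3d = pointmap[mask].astype(np.float64)     # (N, 3) object points
    pts2d = np.stack([u[mask], v[mask]], -1).astype(np.float64)  # (N, 2)

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d, pts2d, K, distCoeffs=None,
        reprojectionError=4.0, iterationsCount=100,
        flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed; image left unregistered")
    R, _ = cv2.Rodrigues(rvec)                    # world-to-camera rotation
    return R, tvec.squeeze(), inliers
```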
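Step 3's refinement objective can be read as a multi-view point-to-ray distance: a 3D point observed in several views should lie on (or near) the camera ray through its pixel in each view. The snippet below is one plausible formulation under that reading, not the paper's exact loss; the camera-to-world rotation and camera-center parameterization is an assumption.

```python
# Hedged sketch of a point-to-camera ray consistency term: for each view,
# cast the ray through each observed pixel and penalize the perpendicular
# distance from the corresponding 3D point to that ray.
import torch

def ray_consistency_loss(points,   # (N, 3) 3D points in world frame
                         pixels,   # (V, N, 2) pixel coords per view
                         K_inv,    # (3, 3) inverse intrinsics
                         R,        # (V, 3, 3) camera-to-world rotations
                         t):       # (V, 3) camera centers in world frame
    V = R.shape[0]
    loss = 0.0
    for v in range(V):
        # Ray direction through each pixel, rotated into the world frame.
        ones = torch.ones_like(pixels[v][:, :1])
        homog = torch.cat([pixels[v], ones], dim=-1)         # (N, 3)
        d = torch.nn.functional.normalize(
            homog @ K_inv.T @ R[v].T, dim=-1)                # (N, 3)
        # Perpendicular distance from each point to its ray.
        rel = points - t[v]                                  # (N, 3)
        along = (rel * d).sum(-1, keepdim=True)              # projection
        perp = rel - along * d                               # residual
        loss = loss + perp.norm(dim=-1).mean()
    return loss / V
```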

Results and Evaluation

The evaluation of ZeroGS spans three datasets: LLFF, MipNeRF360, and Tanks-and-Temples. The results demonstrate superior camera pose accuracy and rendering quality compared to state-of-the-art pose-free NeRF and 3DGS techniques. In several scenes, ZeroGS not only outperformed pose-free approaches but also surpassed methods relying on COLMAP-generated poses, highlighting its robustness and effectiveness.

Quantitatively, ZeroGS achieves substantial improvements in pose estimation, reducing both rotation and translation errors across various scenes in the benchmark datasets. Qualitative assessments further corroborate these findings, with ZeroGS delivering high-fidelity novel view synthesis and detailed scene reconstructions even in the absence of predefined pose information.

Implications and Future Directions

This research has significant implications for both the theory and practice of 3D scene reconstruction. Decoupling camera pose estimation from the initial scene reconstruction aligns with the broader trend toward foundation models in computer vision, emphasizing generalizability and robustness. Practically, ZeroGS paves the way for 3D reconstruction from unstructured datasets, such as those captured on consumer-level devices where pose information is often unreliable or absent.

Future work could explore the scalability of this approach to larger, more complex scenes and the integration of additional modalities, such as depth information, to refine the neural representation. Moreover, the potential for real-time or near-real-time applications, leveraging further optimizations and efficiency improvements, presents intriguing avenues for subsequent research.

In sum, the ZeroGS paper presents a compelling advancement in 3D reconstruction, showing how foundation models can eliminate the traditional dependency on precise camera pose acquisition and thereby promote flexibility and accessibility in 3D modeling applications.
