No Pose, No Problem: Surprisingly Simple 3D Gaussian Splats from Sparse Unposed Images (2410.24207v1)

Published 31 Oct 2024 in cs.CV

Abstract: We introduce NoPoSplat, a feed-forward model capable of reconstructing 3D scenes parameterized by 3D Gaussians from *unposed* sparse multi-view images. Our model, trained exclusively with photometric loss, achieves real-time 3D Gaussian reconstruction during inference. To eliminate the need for accurate pose input during reconstruction, we anchor one input view's local camera coordinates as the canonical space and train the network to predict Gaussian primitives for all views within this space. This approach obviates the need to transform Gaussian primitives from local coordinates into a global coordinate system, thus avoiding errors associated with per-frame Gaussians and pose estimation. To resolve scale ambiguity, we design and compare various intrinsic embedding methods, ultimately opting to convert camera intrinsics into a token embedding and concatenate it with image tokens as input to the model, enabling accurate scene scale prediction. We utilize the reconstructed 3D Gaussians for novel view synthesis and pose estimation tasks and propose a two-stage coarse-to-fine pipeline for accurate pose estimation. Experimental results demonstrate that our pose-free approach can achieve superior novel view synthesis quality compared to pose-required methods, particularly in scenarios with limited input image overlap. For pose estimation, our method, trained without ground truth depth or explicit matching loss, significantly outperforms the state-of-the-art methods with substantial improvements. This work makes significant advances in pose-free generalizable 3D reconstruction and demonstrates its applicability to real-world scenarios. Code and trained models are available at https://noposplat.github.io/.


Summary

  • The paper demonstrates that 3D scene reconstruction can be achieved without camera pose data by leveraging a feed-forward network trained solely with photometric loss.
  • It introduces a canonical space approach where the first view’s local coordinates anchor the scene, effectively aligning 3D Gaussian primitives from sparse images.
  • Experimental results show that NoPoSplat surpasses pose-required methods in novel view synthesis, particularly under limited image overlap, and significantly outperforms state-of-the-art approaches in relative pose estimation.

An Expert Overview of "No Pose, No Problem: Surprisingly Simple 3D Gaussian Splats from Sparse Unposed Images"

The paper "No Pose, No Problem: Surprisingly Simple 3D Gaussian Splats from Sparse Unposed Images" introduces a novel method called NoPoSplat, which addresses a key challenge in generalizable 3D reconstruction: the reliance on accurate camera poses. This method facilitates reconstructing 3D scenes from sparse and unposed multi-view images, leveraging a feed-forward network that operates in real-time, and is noteworthy for its use of only photometric loss during training, bypassing the traditional requirement for detailed camera pose data.

Approach and Technical Details

NoPoSplat is built around a canonical-space formulation in which the local camera coordinates of the first input view serve as the reference frame for the entire scene. The network predicts Gaussian primitives for all views directly in this frame, removing the need to transform per-view Gaussians from local coordinates into a global coordinate system, a common source of error in earlier methods. Anchoring one view in this way keeps the predicted 3D Gaussians consistently aligned across inputs even when only a few images are available, as the sketch below illustrates.
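To make the idea concrete, here is a minimal PyTorch sketch of a pixel-aligned Gaussian prediction head that outputs all Gaussian parameters in the first view's camera frame. The backbone stand-in, feature dimensions, and parameter split are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class CanonicalGaussianHead(nn.Module):
    """Toy head predicting pixel-aligned 3D Gaussians for every input view
    directly in the canonical frame (the first view's camera space).
    A shared MLP over per-pixel features stands in for the paper's network;
    all sizes here are illustrative."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Per pixel: 3 (mean) + 3 (scale) + 4 (rotation quat) + 1 (opacity) + 3 (RGB)
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.GELU(), nn.Linear(128, 14)
        )

    def forward(self, feats: torch.Tensor) -> dict:
        # feats: (views, H*W, feat_dim) fused features for all input views.
        out = self.head(feats)
        mean, scale, rot, opacity, rgb = out.split([3, 3, 4, 1, 3], dim=-1)
        # Key point: `mean` is expressed in view 1's camera coordinates for
        # *every* view, so no local-to-global transform (and hence no pose)
        # is needed to merge the per-view Gaussians into one scene.
        return {
            "mean": mean,                                    # canonical-frame positions
            "scale": scale.exp(),                            # strictly positive scales
            "rotation": nn.functional.normalize(rot, dim=-1),  # unit quaternions
            "opacity": opacity.sigmoid(),
            "rgb": rgb.sigmoid(),
        }

# Usage: fused features for 2 views of a 32x32 image.
feats = torch.randn(2, 32 * 32, 64)
gaussians = CanonicalGaussianHead()(feats)
# All 2*32*32 Gaussians already live in one shared frame; simply concatenate.
merged = {k: v.reshape(-1, v.shape[-1]) for k, v in gaussians.items()}
```

Because every view's Gaussians are predicted in the same frame, merging reduces to concatenation, which is what lets the method skip per-frame pose estimation entirely.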

To resolve scale ambiguity, NoPoSplat designs and compares several intrinsic embedding strategies, ultimately converting the camera intrinsics into a token embedding and concatenating it with the image tokens, which markedly improves scene scale prediction. The reconstructed 3D Gaussians then serve downstream tasks such as novel view synthesis and relative pose estimation; a sketch of the intrinsics token follows.
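The following PyTorch sketch shows one plausible form of this embedding: flatten the pinhole intrinsics to (fx, fy, cx, cy), project them with a linear layer, and prepend the result to the patch tokens. The embedding width, the linear projection, and prepending (rather than appending) the token are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class IntrinsicsToken(nn.Module):
    """Minimal sketch of the intrinsics-embedding idea: project the camera
    intrinsics to the transformer width and attach the resulting token to
    the image patch tokens. Dimensions and input choices are assumptions."""

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(4, embed_dim)

    def forward(self, patch_tokens: torch.Tensor, K: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, N, D) image tokens; K: (B, 3, 3) intrinsics.
        fx, fy = K[:, 0, 0], K[:, 1, 1]
        cx, cy = K[:, 0, 2], K[:, 1, 2]
        intr = torch.stack([fx, fy, cx, cy], dim=-1)    # (B, 4)
        token = self.proj(intr).unsqueeze(1)            # (B, 1, D)
        # Concatenating rather than adding lets attention decide how the
        # intrinsics (and thus the metric scale) influence each patch.
        return torch.cat([token, patch_tokens], dim=1)  # (B, N+1, D)

# Usage with dummy tokens and a dummy pinhole intrinsics matrix.
tokens = torch.randn(1, 196, 256)
K = torch.tensor([[[500.0, 0.0, 112.0], [0.0, 500.0, 112.0], [0.0, 0.0, 1.0]]])
print(IntrinsicsToken()(tokens, K).shape)  # torch.Size([1, 197, 256])
```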

Numerical Results and Comparisons

Extensive experiments demonstrate NoPoSplat's advantages over state-of-the-art methods. Most notably, it surpasses pose-required methods in scenarios with limited input image overlap, a traditionally difficult regime for sparse-view 3D reconstruction. For pose estimation, NoPoSplat substantially outperforms prior state-of-the-art techniques even though it is trained without ground-truth depth or an explicit matching loss; the abstract attributes this to a two-stage coarse-to-fine pipeline built on the reconstructed Gaussians, sketched below.
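Only the coarse-to-fine structure is stated in the abstract, so the following is a hedged sketch of what the fine stage could look like: keep the reconstructed Gaussians frozen and refine a coarse relative pose (for example, one obtained by PnP on the Gaussian centers) by minimizing a photometric loss through a differentiable renderer. `render_gaussians` is an assumed stand-in for such a renderer, and the axis-angle pose parameterization is a choice made here for illustration.

```python
import torch

def axis_angle_to_matrix(aa: torch.Tensor) -> torch.Tensor:
    """Rodrigues' formula: (3,) axis-angle vector -> (3, 3) rotation matrix."""
    theta = aa.norm().clamp_min(1e-8)
    k = aa / theta
    zero = torch.zeros_like(k[0])
    K = torch.stack([
        torch.stack([zero, -k[2], k[1]]),
        torch.stack([k[2], zero, -k[0]]),
        torch.stack([-k[1], k[0], zero]),
    ])
    I = torch.eye(3, dtype=aa.dtype)
    return I + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

def refine_pose(render_gaussians, gaussians, target_image, pose_init, steps=100):
    """Fine stage of a hypothetical coarse-to-fine pipeline: freeze the
    reconstructed Gaussians and optimize the relative camera pose by
    minimizing a photometric loss on the rendered image.
    `render_gaussians(gaussians, R, t)` is an assumed differentiable
    rasterizer returning an image shaped like `target_image`; `pose_init`
    holds the coarse estimate as {"axis_angle": (3,), "translation": (3,)}."""
    aa = pose_init["axis_angle"].clone().requires_grad_(True)
    t = pose_init["translation"].clone().requires_grad_(True)
    opt = torch.optim.Adam([aa, t], lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        R = axis_angle_to_matrix(aa)
        rendered = render_gaussians(gaussians, R, t)
        loss = (rendered - target_image).abs().mean()  # L1 photometric loss
        loss.backward()
        opt.step()
    return axis_angle_to_matrix(aa).detach(), t.detach()
```

Because the loss is purely photometric, this stage needs neither ground-truth depth nor feature matches, consistent with how the paper describes its training signal.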

Implications and Future Directions

Practically, NoPoSplat eliminates the dependency on precomputed camera poses, which greatly broadens its applicability to real-world settings where dense video input or accurate poses are cumbersome or impossible to obtain. Theoretically, because ground-truth depth is not a prerequisite, the approach opens a path toward scaling generalizable 3D reconstruction models on large-scale video datasets.

While successful, the approach is currently limited to static scenes. Extending it to dynamic scenarios, possibly by incorporating temporal coherence and motion dynamics, is an intriguing direction for future work. Exploring more sophisticated intrinsic encoding strategies or unsupervised learning paradigms could further broaden applicability across diverse scene types and data distributions.

Conclusion

In summary, NoPoSplat adds a significant capability to the domain of 3D reconstruction by demonstrating a practical method that removes the need for accurate camera pose data. The paper provides a foundation for future research on refining feed-forward 3D reconstruction methods and improving their deployment under variable real-world constraints.