Gaussian Grouping: Segment and Edit Anything in 3D Scenes (2312.00732v2)

Published 1 Dec 2023 in cs.CV and cs.AI

Abstract: The recent Gaussian Splatting achieves high-quality and real-time novel-view synthesis of the 3D scenes. However, it is solely concentrated on the appearance and geometry modeling, while lacking in fine-grained object-level scene understanding. To address this issue, we propose Gaussian Grouping, which extends Gaussian Splatting to jointly reconstruct and segment anything in open-world 3D scenes. We augment each Gaussian with a compact Identity Encoding, allowing the Gaussians to be grouped according to their object instance or stuff membership in the 3D scene. Instead of resorting to expensive 3D labels, we supervise the Identity Encodings during the differentiable rendering by leveraging the 2D mask predictions by Segment Anything Model (SAM), along with introduced 3D spatial consistency regularization. Compared to the implicit NeRF representation, we show that the discrete and grouped 3D Gaussians can reconstruct, segment and edit anything in 3D with high visual quality, fine granularity and efficiency. Based on Gaussian Grouping, we further propose a local Gaussian Editing scheme, which shows efficacy in versatile scene editing applications, including 3D object removal, inpainting, colorization, style transfer and scene recomposition. Our code and models are at https://github.com/lkeab/gaussian-grouping.

References (61)
  1. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In CVPR, 2022.
  2. Instructpix2pix: Learning to follow image editing instructions. In CVPR, 2023.
  3. Emerging properties in self-supervised vision transformers. In ICCV, 2021.
  4. Segment anything in 3d with nerfs. In NeurIPS, 2023.
  5. Interactive segment anything nerf with feature imitation. arXiv preprint arXiv:2305.16233, 2023a.
  6. Text-to-3d using gaussian splatting. arXiv preprint arXiv:2309.16585, 2023b.
  7. Tracking anything with decoupled video segmentation. In ICCV, 2023.
  8. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In CVPR, 2017.
  9. Panoptic nerf: 3d-to-2d label transfer for panoptic urban scene segmentation. In International Conference on 3D Vision (3DV), 2022.
  10. Instruct-nerf2nerf: Editing 3d scenes with instructions. In ICCV, 2023.
  11. CoNeRF: Controllable Neural Radiance Fields. In CVPR, 2022.
  12. Segment anything in high quality. In NeurIPS, 2023.
  13. 3d gaussian splatting for real-time radiance field rendering. ACM TOG, 42(4):1–14, 2023.
  14. Lerf: Language embedded radiance fields. In ICCV, 2023.
  15. Segment anything. In ICCV, 2023.
  16. Tanks and temples: Benchmarking large-scale scene reconstruction. ACM Transactions on Graphics (TOG), 36(4):1–13, 2017.
  17. Decomposing nerf for editing via feature field distillation. In NeurIPS, 2022.
  18. Point-based neural rendering with per-view optimization. In Computer Graphics Forum, pages 29–43, 2021.
  19. Neural point catacaustics for novel-view synthesis of reflections. ACM TOG, 41(6):1–15, 2022.
  20. Panoptic neural fields: A semantic object-aware neural scene representation. In CVPR, pages 12871–12881, 2022.
  21. Climatenerf: Extreme weather synthesis in neural radiance field. In ICCV, 2023.
  22. Nerf-in: Free-form nerf inpainting with rgb-d priors. arXiv preprint arXiv:2206.04901, 2022.
  23. Editing conditional radiance fields. In ICCV, 2021.
  24. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. arXiv preprint arXiv:2303.05499, 2023a.
  25. Instance neural radiance field. In ICCV, 2023b.
  26. Dynamic 3d gaussians: Tracking by persistent dynamic view synthesis. arXiv preprint arXiv:2308.09713, 2023.
  27. Nelson Max. Optical models for direct volume rendering. IEEE TVCG, 1(2):99–108, 1995.
  28. Local light field fusion: Practical view synthesis with prescriptive sampling guidelines. ACM Transactions on Graphics (TOG), 2019.
  29. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020.
  30. Laterf: Label and text driven object radiance fields. In ECCV, 2022.
  31. Spin-nerf: Multiview segmentation and perceptual inpainting with neural radiance fields. In CVPR, 2023.
  32. Neural scene graphs for dynamic scenes. In CVPR, 2021.
  33. Openscene: 3d scene understanding with open vocabularies. In CVPR, 2023.
  34. Learning transferable visual models from natural language supervision. In ICML, 2021.
  35. Derf: Decomposed radiance fields. In CVPR, 2021.
  36. High-resolution image synthesis with latent diffusion models. In CVPR, 2022.
  37. Mask3D: Mask Transformer for 3D Semantic Instance Segmentation. In ICRA, 2023.
  38. Anything-3d: Towards single-view anything reconstruction in the wild. arXiv preprint arXiv:2304.10261, 2023.
  39. Panoptic lifting for 3d scene understanding with neural fields. In CVPR, 2023.
  40. Photo tourism: exploring photo collections in 3d. In ACM SIGGRAPH, 2006.
  41. Resolution-robust large mask inpainting with fourier convolutions. In WACV, 2022.
  42. OpenMask3D: Open-Vocabulary 3D Instance Segmentation. In NeurIPS, 2023.
  43. Dreamgaussian: Generative gaussian splatting for efficient 3d content creation. arXiv preprint arXiv:2309.16653, 2023.
  44. Neural feature fusion fields: 3d distillation of self-supervised 2d image representations. arXiv preprint arXiv:2209.03494, 2022a.
  45. Neural feature fusion fields: 3D distillation of self-supervised 2D image representations. In International Conference on 3D Vision (3DV), 2022b.
  46. Nesf: Neural semantic fields for generalizable semantic segmentation of 3d scenes, 2021.
  47. Dm-nerf: 3d scene geometry decomposition and manipulation from 2d images. arXiv preprint arXiv:2208.07227, 2022a.
  48. Clip-nerf: Text-and-image driven manipulation of neural radiance fields. In CVPR, 2022b.
  49. 4d gaussian splatting for real-time dynamic scene rendering. arXiv preprint arXiv:2310.08528, 2023.
  50. Learning object-compositional neural radiance field for editable scene rendering. In ICCV, 2021.
  51. Sam3d: Segment anything in 3d scenes. arXiv preprint arXiv:2306.03908, 2023a.
  52. Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction. arXiv preprint arXiv:2309.13101, 2023b.
  53. Real-time photorealistic dynamic scene representation and rendering with 4d gaussian splatting. arXiv preprint arXiv:2310.10642, 2023c.
  54. Gaussiandreamer: Fast generation from text to 3d gaussian splatting with point cloud priors. arXiv preprint arXiv:2310.08529, 2023.
  55. Differentiable surface splatting for point-based geometry processing. ACM Transactions on Graphics (TOG), 38(6):1–14, 2019.
  56. Unsupervised discovery of object radiance fields. arXiv preprint arXiv:2107.07905, 2021.
  57. Nerf-editing: geometry editing of neural radiance fields. In CVPR, 2022.
  58. Faster segment anything: Towards lightweight sam for mobile applications. arXiv preprint arXiv:2306.14289, 2023.
  59. Editable free-viewpoint video using a layered neural representation. ACM Transactions on Graphics (TOG), 40(4):1–18, 2021.
  60. In-place scene labelling and understanding with implicit scene representation. In ICCV, 2021.
  61. Surface splatting. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pages 371–378, 2001.

Summary

  • The paper introduces a novel 3D Identity Encoding that groups 3D Gaussians using 2D SAM mask predictions for detailed scene segmentation.
  • The method combines cross-entropy and 3D spatial consistency losses to supervise segmentation without expensive 3D labels.
  • The approach doubles segmentation accuracy over LERF while enabling efficient, flexible editing for tasks like object removal and style transfer.

Gaussian Grouping: Segment and Edit Anything in 3D Scenes

This paper presents "Gaussian Grouping," a method that extends Gaussian Splatting to jointly reconstruct and segment objects in open-world 3D scenes. The authors augment each 3D Gaussian with a compact Identity Encoding that enables object-instance grouping. 2D mask predictions, specifically from the Segment Anything Model (SAM), supervise these 3D Identity Encodings, addressing the lack of fine-grained scene understanding in vanilla Gaussian Splatting.

Methodology

Gaussian Splatting achieves high-quality, real-time novel-view synthesis, yet it focuses solely on appearance and geometry. The proposed Gaussian Grouping integrates segmentation with reconstruction, enabling it to model objects and elements in 3D environments with high granularity and efficiency.

The method attaches an Identity Encoding to each Gaussian: a compact, learnable vector that lets the 3D Gaussians be grouped by object instance or stuff identity. These encodings are supervised through differentiable rendering, using 2D segmentation masks from SAM together with a 3D spatial consistency regularization. This sidesteps the need for expensive 3D labels, instead using SAM's zero-shot capabilities to lift 2D understanding into 3D space.
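To make the rendering side concrete, below is a minimal PyTorch sketch (not the authors' CUDA rasterizer) of how a per-pixel identity feature can be composited with the same front-to-back alpha-blending weights that Gaussian Splatting uses for color; the tensor names and the 16-dimensional encoding size are illustrative assumptions.

```python
import torch

def composite_identity(encodings: torch.Tensor, alphas: torch.Tensor) -> torch.Tensor:
    """Alpha-blend per-Gaussian Identity Encodings into one per-pixel feature.

    encodings: (N, D) learnable identity vectors of the N Gaussians that
               overlap this pixel, sorted front to back.
    alphas:    (N,) per-Gaussian opacities at this pixel, in [0, 1].

    Uses the same "over" compositing weights as the color pass:
        w_i = alpha_i * prod_{j<i} (1 - alpha_j)
    """
    # Transmittance T_i = prod_{j<i} (1 - alpha_j), with T_0 = 1.
    transmittance = torch.cumprod(
        torch.cat([alphas.new_ones(1), 1.0 - alphas[:-1]]), dim=0)
    weights = alphas * transmittance                        # (N,)
    return (weights.unsqueeze(-1) * encodings).sum(dim=0)   # (D,)

# Toy usage: 5 overlapping Gaussians with 16-dim Identity Encodings.
enc = torch.randn(5, 16, requires_grad=True)
alpha = torch.sigmoid(torch.randn(5))
pixel_feature = composite_identity(enc, alpha)
```

Because the compositing is differentiable, gradients from a 2D segmentation loss flow back into each Gaussian's encoding, which is how 2D masks can supervise 3D grouping.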

The training objective combines a cross-entropy loss for 2D identity supervision with a 3D regularization loss that keeps spatially adjacent Gaussians consistent, improving grouping accuracy.
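A hedged sketch of this objective follows. Here `classifier` is a linear head mapping identity features to SAM mask IDs, `knn_index` holds precomputed 3D nearest neighbors, and the KL-divergence form and `lambda_3d` value are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def grouping_loss(rendered_ids, mask_ids, gaussian_ids, knn_index,
                  classifier, lambda_3d=2.0):
    """Cross-entropy on rendered 2D identities + 3D consistency regularizer.

    rendered_ids: (P, D) identity features rendered at P pixels.
    mask_ids:     (P,)   integer SAM mask IDs for the same pixels.
    gaussian_ids: (N, D) per-Gaussian Identity Encodings.
    knn_index:    (N, k) indices of each Gaussian's k nearest 3D
                         neighbors (assumed precomputed, e.g. KD-tree).
    classifier:   nn.Linear(D, K) mapping encodings to K mask classes.
    """
    # 2D identity loss: each pixel's rendered feature should classify
    # to the mask ID that SAM assigned to that pixel.
    loss_2d = F.cross_entropy(classifier(rendered_ids), mask_ids)

    # 3D regularizer: spatially adjacent Gaussians should predict
    # similar identity distributions (KL between neighbor softmaxes).
    logits = classifier(gaussian_ids)                      # (N, K)
    log_p = F.log_softmax(logits, dim=-1)                  # (N, K)
    q = F.softmax(logits[knn_index], dim=-1)               # (N, k, K)
    loss_3d = F.kl_div(log_p.unsqueeze(1).expand_as(q), q,
                       reduction="batchmean")
    return loss_2d + lambda_3d * loss_3d
```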

Numerical Results

Gaussian Grouping demonstrates strong segmentation performance, roughly doubling the accuracy of existing methods such as LERF on certain datasets. It preserves the reconstruction quality and real-time rendering of standard Gaussian Splatting, and its fine-grained grouping supports diverse editing applications with minimal computational overhead.

Implications and Applications

The primary contribution is representing open-world scenes as grouped 3D Gaussians, which supports direct manipulation without finetuning the entire model. This has practical implications for advanced scene-editing tasks such as 3D object removal, inpainting, and colorization. Because the representation is decoupled and discrete, local edits are efficient, and multiple operations such as object removal and style transfer can be applied simultaneously.
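As an illustration of why local edits are cheap in this representation, the sketch below selects the Gaussians of one group via their Identity Encodings and either drops them (removal) or recolors them (colorization). All tensor names are hypothetical, and a real pipeline stores spherical-harmonic color coefficients rather than raw RGB.

```python
import torch

def select_group(identity_enc: torch.Tensor,
                 classifier: torch.nn.Linear,
                 target_id: int) -> torch.Tensor:
    """Boolean mask over Gaussians whose predicted group is `target_id`."""
    labels = classifier(identity_enc).argmax(dim=-1)   # (N,)
    return labels == target_id

# Hypothetical per-Gaussian tensors: positions, colors, encodings.
N, D, K = 10_000, 16, 256
xyz, rgb = torch.randn(N, 3), torch.rand(N, 3)
enc = torch.randn(N, D)
clf = torch.nn.Linear(D, K)

mask = select_group(enc, clf, target_id=7)

# Object removal: drop the selected Gaussians and re-render the rest.
xyz_removed, rgb_removed = xyz[~mask], rgb[~mask]

# Colorization: overwrite only the selected group's color; geometry
# and every other group remain untouched.
rgb_edit = rgb.clone()
rgb_edit[mask] = torch.tensor([1.0, 0.0, 0.0])  # paint the group red
```

The key point is that an edit touches only the subset of Gaussians in the selected group, so no retraining of the full scene model is required.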

With the grouping scheme, versatile and context-aware scene editing can be performed directly on the 3D representation, with promising applications in AR/VR, robotics, and autonomous systems. The model provides a foundation for future work on efficient, real-time interaction with complex 3D environments.

Future Developments

Future work may extend Gaussian Grouping to dynamic scenes, investigating methods for fully unsupervised scene understanding. Furthermore, enhancing the segmentation with semantic language information could improve practical applicability in domains requiring precise object identification.

In conclusion, Gaussian Grouping offers an innovative solution for integrating high-quality visual synthesis with actionable scene understanding, providing a framework that bridges a gap in 3D scene interaction and editing capabilities.
