NeSLAM: Neural Implicit Mapping and Self-Supervised Feature Tracking With Depth Completion and Denoising (2403.20034v1)
Abstract: In recent years, there have been significant advancements in 3D reconstruction and dense RGB-D SLAM systems. One notable development is the application of Neural Radiance Fields (NeRF) in these systems, which use an implicit neural representation to encode 3D scenes. This extension of NeRF to SLAM has shown promising results. However, the depth images obtained from consumer-grade RGB-D sensors are often sparse and noisy, which poses significant challenges for 3D reconstruction and degrades the accuracy of the scene geometry representation. Moreover, the original hierarchical feature grid with occupancy values is inaccurate for representing scene geometry. Furthermore, existing methods select random pixels for camera tracking, which leads to inaccurate localization and is not robust in real-world indoor environments. To this end, we present NeSLAM, an advanced framework that achieves accurate and dense depth estimation, robust camera tracking, and realistic synthesis of novel views. First, a depth completion and denoising network is designed to provide a dense geometric prior and guide the optimization of the neural implicit representation. Second, the occupancy scene representation is replaced with a hierarchical Signed Distance Field (SDF) scene representation for high-quality reconstruction and view synthesis. Finally, we propose a NeRF-based self-supervised feature tracking algorithm for robust real-time tracking. Experiments on various indoor datasets demonstrate the effectiveness and accuracy of the system in reconstruction, tracking quality, and novel view synthesis.
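The abstract does not spell out how the SDF representation is rendered, so the following is only a minimal sketch of one common scheme used in neural RGB-D reconstruction: per-ray SDF samples are converted into rendering weights with a product of truncated sigmoids and then composited into depth and color. The function names, the `truncation` parameter, and the toy ray values are assumptions for illustration, not NeSLAM's actual formulation.

```python
import numpy as np

def sdf_to_weights(sdf, truncation=0.06):
    """Convert per-sample SDF values along a ray into rendering weights.

    Product-of-sigmoids heuristic (common in neural RGB-D reconstruction);
    the weights peak where the SDF crosses zero, i.e. at the surface.
    """
    w = 1.0 / (1.0 + np.exp(-sdf / truncation))   # sigmoid(sdf / truncation)
    w *= 1.0 / (1.0 + np.exp(sdf / truncation))   # sigmoid(-sdf / truncation)
    return w / (np.sum(w, axis=-1, keepdims=True) + 1e-8)

def render_ray(z_vals, sdf, colors, truncation=0.06):
    """Composite a depth value and an RGB color for one ray from SDF samples."""
    w = sdf_to_weights(sdf, truncation)
    depth = np.sum(w * z_vals, axis=-1)
    rgb = np.sum(w[..., None] * colors, axis=-2)
    return depth, rgb

# Toy example: 64 samples along a single ray, surface roughly at z = 1.5 m.
z_vals = np.linspace(0.1, 4.0, 64)
sdf = 1.5 - z_vals
colors = np.tile(np.array([0.6, 0.4, 0.3]), (64, 1))
depth, rgb = render_ray(z_vals, sdf, colors)   # depth close to 1.5
```

Under these assumptions, supervising the rendered depth with the completed and denoised sensor depth is what lets the dense geometric prior guide the implicit representation.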