Ground-Fusion: A Low-cost Ground SLAM System Robust to Corner Cases (2402.14308v1)
Abstract: We introduce Ground-Fusion, a low-cost sensor-fusion simultaneous localization and mapping (SLAM) system for ground vehicles. Our system features efficient initialization, effective sensor anomaly detection and handling, real-time dense color mapping, and robust localization in diverse environments. We tightly integrate RGB-D images, inertial measurements, wheel odometer readings, and GNSS signals within a factor graph to achieve accurate and reliable localization both indoors and outdoors. To ensure successful initialization, we propose an efficient strategy comprising three methods: stationary, visual, and dynamic, each suited to a different start-up condition. Furthermore, we develop mechanisms that detect sensor anomalies and degradation and handle them to maintain system accuracy. Experimental results on both public and self-collected datasets demonstrate that Ground-Fusion outperforms existing low-cost SLAM systems in corner cases. We release the code and datasets at https://github.com/SJTU-ViSYS/Ground-Fusion.
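The central idea of the abstract is tightly coupled fusion: relative constraints (wheel odometry, IMU, visual) and absolute constraints (GNSS) enter one factor graph and are optimized jointly. The paper does not specify a solver, so the following is a minimal sketch of that idea using GTSAM's Python bindings as an assumed stand-in, fusing one wheel-odometry between-factor with one GNSS-style position factor over two poses:

```python
# Minimal factor-graph fusion sketch (GTSAM assumed for illustration;
# this is not the paper's implementation).
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X  # pose variable keys x0, x1, ...

graph = gtsam.NonlinearFactorGraph()

# Anchor the first pose at the origin with a tight prior.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.01))
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))

# Wheel-odometry constraint: 1 m forward between the two poses.
# Pose3 noise ordering is (rot x, rot y, rot z, trans x, trans y, trans z).
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.02, 0.02, 0.02, 0.05, 0.05, 0.05]))
odom_delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))
graph.add(gtsam.BetweenFactorPose3(X(0), X(1), odom_delta, odom_noise))

# GNSS-like absolute position fix on the second pose (0.5 m sigma).
gnss_noise = gtsam.noiseModel.Isotropic.Sigma(3, 0.5)
graph.add(gtsam.GPSFactor(X(1), gtsam.Point3(1.02, -0.01, 0.0), gnss_noise))

# Start from a deliberately wrong guess; optimization reconciles all factors.
initial = gtsam.Values()
initial.insert(X(0), gtsam.Pose3())
initial.insert(X(1), gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.9, 0.0, 0.0)))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
# Translation lands near (1.0, 0, 0): the tighter odometry factor outweighs
# the looser GNSS fix, exactly the weighting a factor graph encodes.
print(result.atPose3(X(1)).translation())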
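In the full system, IMU preintegration and visual reprojection factors would join the same graph, and the anomaly-handling module would gate or down-weight factors from degraded sensors before optimization.

The "stationary" branch of the initialization strategy presupposes a reliable zero-motion test before a zero-velocity prior can be applied. A common recipe, sketched below with illustrative thresholds that are our assumptions rather than the paper's values, thresholds short-window IMU statistics together with the wheel-encoder speed:

```python
import numpy as np

def is_stationary(gyro, accel, wheel_vel,
                  gyro_thresh=0.02, accel_std_thresh=0.05, wheel_thresh=1e-3):
    """Heuristic zero-motion test over a short sensor window.

    gyro: (N, 3) rad/s; accel: (N, 3) m/s^2; wheel_vel: (N,) m/s.
    All thresholds are illustrative guesses, not values from the paper.
    """
    gyro_still = np.linalg.norm(gyro, axis=1).mean() < gyro_thresh   # no rotation
    accel_still = accel.std(axis=0).max() < accel_std_thresh         # only gravity
    wheels_still = np.abs(wheel_vel).max() < wheel_thresh            # encoders idle
    return gyro_still and accel_still and wheels_still

# Example: a synthetic static window passes the test.
rng = np.random.default_rng(0)
gyro = rng.normal(0.0, 0.005, (200, 3))               # small gyro noise
accel = rng.normal([0.0, 0.0, 9.81], 0.02, (200, 3))  # gravity plus noise
print(is_stationary(gyro, accel, np.zeros(200)))      # True
```

Once such a test fires, the gravity direction and gyroscope bias can be estimated by averaging the static window and the system can initialize with a zero-velocity prior; if the platform is already moving, the visual or dynamic (wheel-aided) branch would take over instead.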