Ground-Fusion: A Low-cost Ground SLAM System Robust to Corner Cases (2402.14308v1)

Published 22 Feb 2024 in cs.RO

Abstract: We introduce Ground-Fusion, a low-cost sensor fusion simultaneous localization and mapping (SLAM) system for ground vehicles. Our system features efficient initialization, effective sensor anomaly detection and handling, real-time dense color mapping, and robust localization in diverse environments. We tightly integrate RGB-D images, inertial measurements, wheel odometer and GNSS signals within a factor graph to achieve accurate and reliable localization both indoors and outdoors. To ensure successful initialization, we propose an efficient strategy that comprises three different methods: stationary, visual, and dynamic, tailored to handle diverse cases. Furthermore, we develop mechanisms to detect sensor anomalies and degradation, handling them adeptly to maintain system accuracy. Our experimental results on both public and self-collected datasets demonstrate that Ground-Fusion outperforms existing low-cost SLAM systems in corner cases. We release the code and datasets at https://github.com/SJTU-ViSYS/Ground-Fusion.


Summary

  • The paper presents Ground-Fusion, a low-cost SLAM system that tightly fuses RGB-D images, inertial measurements, wheel odometry, and GNSS within a factor graph for robust ground-vehicle localization.
  • It combines an adaptive three-way initialization strategy, detection and handling of sensor anomalies such as wheel slip and GNSS degradation, and real-time dense color mapping.
  • Experiments on public and self-collected datasets show superior performance in challenging corner cases, including transitions between indoor and outdoor settings.

Ground-Fusion: An Efficient and Robust SLAM System for Diverse Environments and Sensor Anomalies

Introduction

Simultaneous Localization and Mapping (SLAM) is a core technology enabling ground robots to navigate effectively across diverse settings, from indoor corridors to outdoor terrains. Recent advances have largely revolved around integrating multiple sensing modalities to improve accuracy and reliability. Ground-Fusion, a low-cost SLAM system, makes significant strides in this direction: by tightly coupling data from RGB-D cameras, inertial measurement units (IMUs), wheel odometers, and Global Navigation Satellite Systems (GNSS), it delivers an adaptable and precise solution capable of handling a variety of challenging scenarios and corner cases.
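To make the tightly coupled formulation concrete, here is a minimal factor-graph sketch. It is not the authors' implementation (their C++ code is in the linked repository); it is an illustrative toy using the GTSAM library that fuses relative-pose odometry factors with absolute GNSS position factors, the same structural pattern Ground-Fusion applies across its RGB-D, inertial, wheel, and GNSS measurements. All keys, noise values, and measurements below are assumptions chosen for illustration.

```python
# Minimal factor-graph fusion sketch (illustrative only; not Ground-Fusion's code).
# Requires: pip install gtsam numpy
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()

# Noise models: 6-DoF pose noise (3 rot, 3 trans) and 3-DoF GNSS position noise.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.01] * 6))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02] * 3 + [0.05] * 3))
gnss_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1.0, 1.0, 2.0]))

X = lambda i: gtsam.symbol('x', i)  # pose-variable keys

# Anchor the first pose at the origin.
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))

# Wheel/visual odometry enters as relative-pose factors: ~1 m forward per step.
step = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))
for i in range(3):
    graph.add(gtsam.BetweenFactorPose3(X(i), X(i + 1), step, odom_noise))

# A GNSS fix enters as an absolute position factor where signal is available.
graph.add(gtsam.GPSFactor(X(3), gtsam.Point3(3.1, 0.1, 0.0), gnss_noise))

# Initial guesses, then joint optimization over all factors at once.
initial = gtsam.Values()
for i in range(4):
    initial.insert(X(i), gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(float(i), 0.0, 0.0)))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(3)).translation())  # final pose pulled toward the GNSS fix
```

Joint optimization lets the absolute GNSS factor correct drift accumulated along the relative odometry chain; tight coupling means every measurement shapes the same estimate simultaneously rather than being filtered in separate stages.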

System Overview

Ground-Fusion leverages an innovative combination of sensing modalities within a factor graph optimization framework, focusing on efficient system initialization and robust sensor fusion. Key aspects include:

  • Adaptive Initialization: Stationary, visual, and dynamic strategies allow the system to initialize accurately across varied motion states and environmental conditions (a hedged sketch of this strategy selection follows this list).
  • Anomaly Detection and Handling: Dedicated mechanisms identify and mitigate sensor faults arising from visual degradation, wheel slippage, or GNSS signal loss.
  • Real-time Dense Color Mapping: A dense colored map gives the robot a richer representation of its surroundings and supports downstream navigation tasks.
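The paper's exact switching criteria are not reproduced here, so the following is a minimal sketch assuming that accelerometer variance gates the stationary path and wheel speed gates the dynamic one; the function name and both thresholds are hypothetical.

```python
import numpy as np

# Illustrative thresholds; Ground-Fusion's actual criteria and values may differ.
ACC_VAR_STATIONARY = 1e-3   # (m/s^2)^2: accel variance below this => at rest
WHEEL_SPEED_DYNAMIC = 0.3   # m/s: wheel speed above this => already moving

def select_init_strategy(acc_window: np.ndarray, wheel_speed: float) -> str:
    """Pick one of the paper's three initialization paths (hypothetical logic).

    acc_window: (N, 3) array of recent accelerometer samples.
    wheel_speed: current wheel-odometer speed estimate in m/s.
    """
    acc_var = acc_window.var(axis=0).sum()
    if acc_var < ACC_VAR_STATIONARY:
        # At rest: gyro bias and gravity direction can be estimated directly.
        return "stationary"
    if wheel_speed > WHEEL_SPEED_DYNAMIC:
        # Already in motion: bootstrap velocity and scale from wheel odometry.
        return "dynamic"
    # Slow or ambiguous motion: fall back to visual(-inertial) initialization.
    return "visual"
```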

System Evaluation

Extensive experiments validate the effectiveness and robustness of Ground-Fusion. Compared with existing low-cost SLAM systems, it performs particularly well in corner cases: transitions from outdoor to indoor environments, areas with dense tree cover that degrade GNSS reception, and settings prone to wheel anomalies. Its handling of low-speed motion and zero-velocity updates further underscores its adaptability.
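Zero-velocity detection is a well-studied building block in inertial navigation; below is a minimal sketch of a variance-and-magnitude detector of the kind such systems typically use. The window length, thresholds, and function name are illustrative assumptions, not values from the paper.

```python
import numpy as np

GRAVITY = 9.81  # m/s^2

def is_zero_velocity(acc: np.ndarray, gyro: np.ndarray,
                     acc_std_thresh: float = 0.05,    # m/s^2, illustrative
                     gyro_norm_thresh: float = 0.02,  # rad/s, illustrative
                     ) -> bool:
    """Variance-and-magnitude test over a short IMU window (e.g. 0.2 s).

    acc, gyro: (N, 3) windows of accelerometer and gyroscope samples.
    Returns True when the platform can be treated as stationary; the
    estimator can then add a zero-velocity constraint to curb drift.
    """
    acc_norm = np.linalg.norm(acc, axis=1)
    still_acc = (abs(acc_norm.mean() - GRAVITY) < 3 * acc_std_thresh
                 and acc_norm.std() < acc_std_thresh)
    still_gyro = np.linalg.norm(gyro, axis=1).mean() < gyro_norm_thresh
    return still_acc and still_gyro
```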

Methodological Contributions

Ground-Fusion's methodology stands out in several ways:

  • Its initialization is distinctive, leveraging an adaptive mechanism that responds dynamically to the vehicle's motion state.
  • Its anomaly detection and handling maintain accuracy even when individual sensor streams are compromised (a hedged consistency-check sketch follows this list).
  • Through strategic fusion of multi-sensor data, it maintains accurate localization while minimizing the trade-offs associated with low-cost sensors.
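One common way to flag wheel anomalies such as slippage, shown below as an assumption rather than the paper's exact test, is a consistency check between the wheel-integrated displacement and a visual-inertial reference over the same interval; a failed check demotes or drops the wheel measurement. All names and the threshold are hypothetical.

```python
import numpy as np

SLIP_RATIO_THRESH = 0.5  # illustrative gating threshold

def wheel_measurement_ok(wheel_disp: np.ndarray, ref_disp: np.ndarray,
                         thresh: float = SLIP_RATIO_THRESH) -> bool:
    """Consistency check between wheel-integrated displacement and a
    visual-inertial reference displacement over the same interval.

    Both inputs are 3-vectors in the body frame; a large relative
    discrepancy suggests slippage or a blocked wheel.
    """
    denom = max(float(np.linalg.norm(ref_disp)), 1e-3)  # avoid divide-by-zero
    ratio = float(np.linalg.norm(wheel_disp - ref_disp)) / denom
    return ratio < thresh

# Usage sketch: when the check fails, skip the wheel factor for this interval
# (or inflate its noise) so the anomalous measurement cannot corrupt the graph.
```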

Implications and Future Directions

The development of Ground-Fusion has practical implications for the deployment of ground robots across various industries, including logistics and manufacturing. Its ability to navigate reliably in both indoor and outdoor settings, coupled with its resilience to sensor anomalies, makes it particularly valuable where cost and operational efficiency are paramount.

The foundation laid by Ground-Fusion points to several directions for future research. Advanced fusion algorithms that further suppress the influence of environmental anomalies on localization accuracy remain a key avenue, and machine learning techniques for handling dynamic, unpredictable environments could offer additional improvement.

Conclusion

Ground-Fusion represents a significant step forward in the development of reliable, low-cost SLAM systems for ground vehicles. By intelligently integrating multiple sensors and emphasizing robustness in the face of anomalies, it establishes a new benchmark for performance in challenging conditions. As SLAM technology continues to evolve, Ground-Fusion lays the groundwork for future innovations aimed at creating even more versatile and resilient navigation systems for autonomous robots.
