
HDA-LVIO: A High-Precision LiDAR-Visual-Inertial Odometry in Urban Environments with Hybrid Data Association (2403.06590v1)

Published 11 Mar 2024 in cs.RO

Abstract: To enhance localization accuracy in urban environments, an innovative LiDAR-Visual-Inertial odometry named HDA-LVIO is proposed, employing hybrid data association. The proposed HDA-LVIO system comprises two subsystems: the LiDAR-Inertial subsystem (LIS) and the Visual-Inertial subsystem (VIS). In the LIS, the LiDAR point cloud is used to compute the Iterative Closest Point (ICP) error, which serves as the measurement of an Error-State Iterated Kalman Filter (ESIKF) to construct the global map. In the VIS, an incremental method is first employed to adaptively extract planes from the global map, and the centroids of these planes are projected onto the image to obtain projection points. Feature points are then extracted from the image and tracked, along with the projection points, using Lucas-Kanade (LK) optical flow. Next, leveraging the vehicle states from previous intervals, sliding-window optimization is performed to estimate the depth of the feature points. Concurrently, a method based on epipolar geometric constraints is proposed to handle feature-point tracking failures, improving depth-estimation accuracy by ensuring sufficient parallax within the sliding window. Subsequently, the feature points and projection points are hybridly associated to construct the reprojection error, which serves as the measurement of the ESIKF to estimate the vehicle states. Finally, the localization accuracy of the proposed HDA-LVIO is validated on public datasets and on data collected with our own equipment. The results demonstrate that the proposed algorithm achieves a clear improvement in localization accuracy over various existing algorithms.
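To make the hybrid-association measurement concrete, the reprojection residual for an associated point pair takes the usual pinhole form below. This is a minimal sketch assuming a standard pinhole camera with intrinsics (f_x, f_y, c_x, c_y) and a world-to-camera transform (R_cw, t_cw); the paper's exact ESIKF measurement model and state parameterization may differ.

```latex
% Generic reprojection residual: observed pixel u versus the projection
% of the associated world point P_w (a sketch; not the paper's exact model).
r \;=\; u \;-\; \pi\!\left( R_{cw} P_w + t_{cw} \right),
\qquad
\pi\!\left(\begin{bmatrix} x \\ y \\ z \end{bmatrix}\right)
  \;=\;
  \begin{bmatrix} f_x \, x / z + c_x \\[2pt] f_y \, y / z + c_y \end{bmatrix}.
```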
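The LK-tracking step combined with an epipolar-geometry check can also be sketched in code. Below is a minimal Python illustration, assuming OpenCV and a fundamental matrix F obtained from a pose prior; the function name, window size, and pixel threshold are illustrative choices, not values from the paper.

```python
import numpy as np
import cv2

def track_with_epipolar_gate(img_prev, img_next, pts_prev, F, thresh_px=1.0):
    """Track pts_prev (N x 2 pixels) from img_prev to img_next with
    pyramidal Lucas-Kanade optical flow, then drop tracks whose distance
    to the epipolar line induced by F exceeds thresh_px pixels.
    Illustrative sketch only, not the paper's implementation."""
    p0 = pts_prev.reshape(-1, 1, 2).astype(np.float32)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(
        img_prev, img_next, p0, None, winSize=(21, 21), maxLevel=3)

    # Homogeneous pixel coordinates in both frames.
    x0 = np.hstack([p0.reshape(-1, 2), np.ones((len(p0), 1))])
    x1 = np.hstack([p1.reshape(-1, 2), np.ones((len(p1), 1))])

    # Epipolar line in the next image for each point: l' = F x.
    lines = x0 @ F.T
    # Point-to-line distance |l' . x'| / ||(a, b)||.
    dist = np.abs(np.sum(lines * x1, axis=1)) / (
        np.linalg.norm(lines[:, :2], axis=1) + 1e-12)

    keep = (status.ravel() == 1) & (dist < thresh_px)
    return p1.reshape(-1, 2)[keep], keep
```

In the paper's pipeline, tracks failing such a geometric check are handled by the proposed epipolar-constraint method to preserve parallax within the sliding window; the gate above only illustrates the underlying constraint.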

Authors (5)
  1. Jian Shi (53 papers)
  2. Wei Wang (1793 papers)
  3. Mingyang Qi (2 papers)
  4. Xin Li (980 papers)
  5. Ye Yan (22 papers)
