Square-Root Inverse Filter-based GNSS-Visual-Inertial Navigation (2405.10874v1)

Published 17 May 2024 in cs.RO

Abstract: While the Global Navigation Satellite System (GNSS) is often used to provide global positioning when available, its intermittency and/or inaccuracy calls for fusion with other sensors. In this paper, we develop a novel GNSS-Visual-Inertial Navigation System (GVINS) that fuses visual, inertial, and raw GNSS measurements within the square-root inverse sliding window filtering (SRI-SWF) framework in a tightly coupled fashion, and is thus termed SRI-GVINS. In particular, for the first time, we deeply fuse GNSS pseudorange, Doppler shift, single-differenced pseudorange, and double-differenced carrier phase measurements together with the visual-inertial measurements. Inherited from the SRI-SWF, the proposed SRI-GVINS gains significant numerical stability and computational efficiency over state-of-the-art methods. Additionally, we propose to use a filter to sequentially initialize the reference frame transformation until it converges, rather than collecting measurements for batch optimization. We also perform online calibration of the GNSS-IMU extrinsic parameters to mitigate possible degradation of the extrinsic calibration. The proposed SRI-GVINS is extensively evaluated on our own UAV datasets, and the results demonstrate that the proposed method is able to suppress VIO drift in real time; they also show the effectiveness of the online GNSS-IMU extrinsic calibration. Experimental validation on public datasets further reveals that the proposed SRI-GVINS outperforms state-of-the-art methods in terms of both accuracy and efficiency.
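
As an illustration of the square-root inverse filtering idea the abstract builds on, the sketch below shows one generic measurement update in square-root information form: the prior is kept as an upper-triangular factor R of the information matrix (Lambda = R^T R), the whitened measurement Jacobian is stacked beneath it, and a QR factorization re-triangularizes the result. This is a minimal, hypothetical NumPy sketch, not the paper's SRI-SWF implementation; the function name sri_update and the toy range-like measurement are illustrative assumptions.

    import numpy as np

    def sri_update(R, r, H, res, sigma):
        # R     : upper-triangular square-root information factor (n x n), Lambda = R^T R
        # r     : information-space residual vector (n,), with state correction dx = R^{-1} r
        # H     : measurement Jacobian (m x n)
        # res   : measurement residual z - h(x) (m,)
        # sigma : measurement noise standard deviation (scalar, isotropic here)
        Hw = H / sigma                      # whiten the Jacobian by the noise std
        zw = res / sigma                    # whiten the residual
        A = np.vstack([R, Hw])              # stack prior factor and new measurement rows
        b = np.concatenate([r, zw])
        Q, R_new = np.linalg.qr(A)          # reduced QR: R_new is the updated n x n upper-triangular factor
        r_new = Q.T @ b                     # updated information-space residual
        return R_new, r_new

    # Toy usage: 2-dimensional state, one scalar range-like measurement.
    R0, r0 = np.eye(2), np.zeros(2)
    H = np.array([[1.0, 0.0]])
    res = np.array([0.3])
    R1, r1 = sri_update(R0, r0, H, res, sigma=0.5)
    dx = np.linalg.solve(R1, r1)            # state correction from the triangular factor

Because only the triangular factor is propagated, its condition number is roughly the square root of that of the full information matrix, which is the standard reason square-root filters enjoy the numerical-stability advantage the abstract mentions.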
