PO-MSCKF: An Efficient Visual-Inertial Odometry by Reconstructing the Multi-State Constrained Kalman Filter with the Pose-only Theory (2407.01888v1)

Published 2 Jul 2024 in cs.RO and cs.CV

Abstract: Efficient Visual-Inertial Odometry (VIO) is crucial for payload-constrained robots. Although modern optimization-based algorithms achieve superior accuracy, MSCKF-based VIO algorithms remain widely used for their efficiency and consistent performance. Because the MSCKF is built upon conventional multi-view geometry, the measurement residuals are related not only to the state errors but also to the feature position errors. To apply EKF fusion, a projection step is required to remove the feature position errors from the observation model, which can degrade the model and its accuracy. To obtain an efficient visual-inertial fusion model while preserving model consistency, we propose to reconstruct the MSCKF VIO with the novel Pose-Only (PO) multi-view geometry description. In the newly constructed filter, we model PO reprojection residuals that depend solely on the motion states, thus removing the need for the space-projection step. Moreover, the new filter requires no feature position information, which eliminates the computational cost and linearization errors introduced by the 3D reconstruction procedure. We have conducted comprehensive experiments on multiple datasets, where the proposed method shows accuracy improvements and consistent performance on challenging sequences.
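The projection step the abstract refers to is the standard MSCKF null-space trick: the stacked residual model r ≈ H_x·δx + H_f·δf is left-multiplied by a basis A of the left null space of H_f, yielding a measurement model that no longer involves the feature-position error δf. A minimal NumPy sketch of that step, with made-up dimensions (a feature seen from four poses) rather than the paper's actual implementation:

```python
import numpy as np

# Hypothetical sizes: one feature observed from 4 camera poses gives
# 2*4 = 8 residual rows; the stacked pose-error Jacobian H_x has
# 6 columns per pose (24 total), and the feature Jacobian H_f has 3.
rng = np.random.default_rng(0)
H_x = rng.standard_normal((8, 24))   # Jacobian w.r.t. motion-state errors
H_f = rng.standard_normal((8, 3))    # Jacobian w.r.t. feature-position error
r = rng.standard_normal(8)           # stacked reprojection residuals

# Columns of U beyond rank(H_f) span the left null space: A.T @ H_f = 0.
U, _, _ = np.linalg.svd(H_f)
A = U[:, 3:]                         # 8 x 5 left-null-space basis

# Projected model: r_o = A.T @ r ~ (A.T @ H_x) dx + noise,
# independent of the feature-position error df.
r_o = A.T @ r
H_o = A.T @ H_x

# The feature-position term has been annihilated.
assert np.allclose(A.T @ H_f, 0.0)
```

The projection discards 3 of the 8 residual dimensions per feature, which is exactly the information-loss and linearization cost the pose-only reconstruction proposed here avoids by never forming H_f in the first place.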

