
A Robust and Efficient Visual-Inertial Initialization with Probabilistic Normal Epipolar Constraint (2410.19473v2)

Published 25 Oct 2024 in cs.RO

Abstract: Accurate and robust initialization is essential for Visual-Inertial Odometry (VIO), as poor initialization can severely degrade pose accuracy. During initialization, it is crucial to estimate parameters such as the accelerometer bias, gyroscope bias, initial velocity, and gravity. Most existing VIO initialization methods adopt Structure from Motion (SfM) to solve for the gyroscope bias. However, SfM is neither stable nor efficient enough in fast-motion or degenerate scenes. To overcome these limitations, we extend the rotation-translation-decoupled framework by adding new uncertainty parameters and optimization modules. First, we adopt a gyroscope bias estimator that incorporates probabilistic normal epipolar constraints. Second, we fuse IMU and visual measurements to solve efficiently for velocity, gravity, and scale. Finally, we design an additional refinement module that effectively reduces gravity and scale errors. Extensive tests on the EuRoC dataset show that our method reduces gyroscope bias and rotation errors by 16% and 4% on average, and gravity error by 29% on average. On the TUM dataset, our method reduces gravity and scale errors by 14.2% and 5.7% on average, respectively. The source code is available at https://github.com/MUCS714/DRT-PNEC.git
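
The gyroscope bias estimation the abstract describes builds on the normal epipolar constraint: for the true frame-to-frame rotation, the epipolar-plane normals n_i = (R f1_i) x f2_i of all feature correspondences are coplanar (all perpendicular to the translation), so the smallest eigenvalue of their weighted scatter matrix vanishes. Below is a minimal NumPy sketch of this idea under simplifying assumptions: it uses the unweighted constraint with a brute-force search over candidate biases, whereas the paper weights each term probabilistically by feature-position uncertainty and optimizes iteratively. All function names are illustrative, not from the released code.

```python
import numpy as np

def rotation_from_gyro(omega, dt):
    # Rotation over one interval from a (bias-corrected) gyro sample,
    # via the Rodrigues formula on the rotation vector omega * dt.
    theta = omega * dt
    angle = np.linalg.norm(theta)
    if angle < 1e-12:
        return np.eye(3)
    a = theta / angle
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def nec_energy(R, f1, f2, weights=None):
    # Normal epipolar constraint energy: the epipolar-plane normals
    # n_i = (R f1_i) x f2_i are all perpendicular to the translation
    # for the true rotation, so the smallest eigenvalue of the
    # (weighted) scatter matrix M = sum_i w_i n_i n_i^T is ~0 there.
    normals = np.cross((R @ f1.T).T, f2)
    if weights is None:
        weights = np.ones(len(normals))
    M = (weights[:, None] * normals).T @ normals
    return np.linalg.eigvalsh(M)[0]  # eigenvalues in ascending order

def estimate_gyro_bias(omega, dt, f1, f2, bias_grid):
    # Illustrative brute-force search: the bias candidate whose
    # corrected rotation R(omega - b) minimizes the NEC energy wins.
    # (The paper instead optimizes a probabilistically weighted cost.)
    best_b, best_e = None, np.inf
    for b in bias_grid:
        e = nec_energy(rotation_from_gyro(omega - b, dt), f1, f2)
        if e < best_e:
            best_b, best_e = b, e
    return best_b
```

The coplanarity test is what makes this rotation-only: the translation never has to be estimated, which is why the framework stays robust in the fast-motion and degenerate scenes where SfM-based initialization struggles.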
