
PVI-DSO: Leveraging Planar Regularities for Direct Sparse Visual-Inertial Odometry (2204.02635v2)

Published 6 Apr 2022 in cs.RO

Abstract: Monocular visual-inertial odometry (VIO) based on the direct method can leverage all available pixels in the image to estimate the camera motion and reconstruct a denser map of the scene simultaneously in real time. However, the direct method is sensitive to photometric changes, which can be compensated by introducing geometric information from the environment. In this paper, we propose a monocular direct sparse visual-inertial odometry that exploits planar regularities (PVI-DSO). Our system detects planar regularities from a 3D mesh built on the estimated map points. To improve pose estimation accuracy with this geometric information, a tightly coupled coplanar constraint is used to express the photometric error in the direct method. Additionally, to improve optimization efficiency, we derive the analytical Jacobian of the linearized form of the coplanar constraint. Finally, the inertial measurement error, coplanar point photometric error, non-coplanar point photometric error, and prior error are added to the optimizer, which simultaneously improves the pose estimation accuracy and the mesh itself. We verified the performance of the whole system on simulation and real-world datasets. Extensive experiments demonstrate that our system outperforms state-of-the-art counterparts.
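For intuition, here is a minimal sketch of how a coplanar constraint can be folded into a direct photometric error. The notation follows standard DSO-style formulations and is an assumption for illustration, not the paper's exact expression. For a plane $\pi = (\mathbf{n}, d)$ whose points satisfy $\mathbf{n}^\top \mathbf{X} = d$, the inverse depth of a host pixel $\mathbf{p}$ backprojected along the ray $K^{-1}\tilde{\mathbf{p}}$ is fully determined by the plane:

$$\rho_{\mathbf{p}} = \frac{\mathbf{n}^\top K^{-1} \tilde{\mathbf{p}}}{d}.$$

Substituting this into the photometric residual between host frame $i$ and target frame $j$ (with $\Pi$, $\Pi^{-1}$ the projection and inverse-depth backprojection, and $a_{ji}, b_{ji}$ an affine brightness transfer),

$$r_{\mathbf{p}} = I_j\!\left(\Pi\!\left(R_{ji}\,\Pi^{-1}(\mathbf{p}, \rho_{\mathbf{p}}) + \mathbf{t}_{ji}\right)\right) - \left(a_{ji}\, I_i(\mathbf{p}) + b_{ji}\right),$$

makes the residual a function of the plane parameters $(\mathbf{n}, d)$ and the relative pose $(R_{ji}, \mathbf{t}_{ji})$ rather than a per-point inverse depth. Optimizing such a term jointly with the IMU preintegration, non-coplanar photometric, and prior errors is what tightly couples the planar regularities to both the pose estimate and the mesh.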
