
Towards Aerial Collaborative Stereo: Real-Time Cross-Camera Feature Association and Relative Pose Estimation for UAVs (2402.17504v2)

Published 27 Feb 2024 in cs.RO

Abstract: The collaborative visual perception of multiple Unmanned Aerial Vehicles (UAVs) has become an increasingly active research topic. Compared to a single UAV equipped with a short-baseline stereo camera, multi-UAV collaborative vision offers a wide and variable baseline, providing potential benefits for flexible and large-scale depth perception. In this paper, we propose the concept of a collaborative stereo camera, in which the left and right cameras are mounted on two UAVs that share an overlapping field of view (FOV). Because the two UAVs fly dynamically in the real world, the FOV and the relative pose of the left and right cameras change continuously. Compared to fixed-baseline stereo cameras, this aerial collaborative stereo system introduces two challenges: meeting stringent real-time requirements for dynamic cross-camera stereo feature association and for relative pose estimation of the left and right cameras. To address these challenges, we first propose a real-time dual-channel feature association algorithm with a guidance-prediction structure. We then propose a Relative Multi-State Constrained Kalman Filter (Rel-MSCKF) algorithm that estimates the relative pose by fusing co-visible features with the UAVs' visual-inertial odometry (VIO). Extensive experiments are performed on the popular NVIDIA NX onboard computer. Results on this resource-constrained platform show that the real-time performance of the dual-channel feature association is significantly superior to that of traditional methods. The convergence of Rel-MSCKF is assessed under different initial baseline errors. Finally, we present a potential application of aerial collaborative stereo: remotely mapping obstacles in urban scenarios. We hope this work can serve as a foundational study for further multi-UAV collaborative vision research. Online video: https://youtu.be/avxMuOf5Qcw
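The abstract leaves the underlying geometry implicit, so the following is a minimal sketch (not from the paper; all names, values, and the DLT formulation are illustrative assumptions) of the depth-recovery step that a variable-baseline collaborative stereo pair enables: once a Rel-MSCKF-style estimator provides the relative pose between the two cameras and the cross-camera matcher has associated a feature, the feature can be triangulated by standard linear (DLT) triangulation.

```python
import numpy as np

def triangulate(K_l, K_r, T_rl, x_l, x_r):
    """Linear (DLT) triangulation of one co-visible feature.

    Illustrative sketch, not the paper's implementation.
    K_l, K_r : 3x3 intrinsics of the left/right UAV cameras.
    T_rl     : 4x4 pose mapping left-camera coordinates to the
               right-camera frame (the time-varying relative pose
               a Rel-MSCKF-style estimator would supply).
    x_l, x_r : matched pixel coordinates (u, v) in each view.
    Returns the 3D point in the left-camera frame.
    """
    # Projection matrices: left camera at the origin, right at T_rl.
    P_l = K_l @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_r = K_r @ T_rl[:3, :]

    # Each view contributes two rows of the homogeneous system A X = 0.
    A = np.vstack([
        x_l[0] * P_l[2] - P_l[0],
        x_l[1] * P_l[2] - P_l[1],
        x_r[0] * P_r[2] - P_r[0],
        x_r[1] * P_r[2] - P_r[1],
    ])
    # Solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

if __name__ == "__main__":
    # Hypothetical setup: identical intrinsics, 0.5 m baseline along
    # the left camera's x-axis.
    K = np.array([[400.0, 0.0, 320.0],
                  [0.0, 400.0, 240.0],
                  [0.0, 0.0, 1.0]])
    T_rl = np.eye(4)
    T_rl[0, 3] = -0.5                         # X_r = X_l + (-0.5, 0, 0)
    X_true = np.array([0.25, 0.0, 5.0])       # ground truth, left frame
    x_l = (K @ X_true)[:2] / X_true[2]        # project into left view
    X_r = T_rl[:3, :3] @ X_true + T_rl[:3, 3]
    x_r = (K @ X_r)[:2] / X_r[2]              # project into right view
    print(triangulate(K, K, T_rl, x_l, x_r))  # ~ [0.25, 0.0, 5.0]
```

Note that the wide, time-varying baseline enters only through T_rl: the same routine serves a half-meter or a tens-of-meters baseline, which is what makes the setup attractive for large-scale depth perception, at the cost of having to estimate T_rl online as the UAVs move.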


