RHAML: Rendezvous-based Hierarchical Architecture for Mutual Localization (2405.11726v1)

Published 20 May 2024 in cs.RO

Abstract: Mutual localization is the foundation for collaborative perception and task assignment in multi-robot systems. Making effective use of limited onboard sensors for mutual localization between marker-less robots is therefore a worthwhile goal. However, because prior work inadequately handles the large scale variations of the observed robot and lacks localization refinement, it achieves limited accuracy when robots are equipped only with RGB cameras. To improve localization precision, this paper proposes a novel rendezvous-based hierarchical architecture for mutual localization (RHAML). First, anisotropic convolutions are introduced into the network to learn multi-scale robot features, yielding initial localization results. Then, an iterative refinement module with rendering adjusts the observed robot poses. Finally, pose graph optimization globally refines all localization results by taking multi-frame observations into account. The result is a flexible architecture in which modules can be selected according to requirements. Simulations demonstrate that RHAML effectively addresses multi-robot mutual localization, achieving translation errors below 2 cm and rotation errors below 0.5 degrees under 5 m of depth variation of the observed robot. Its practical utility is further validated by applying it to map fusion when multiple robots explore unknown environments.
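The abstract does not give implementation details for the anisotropic convolutions. As a rough illustration only, the sketch below follows the InceptionNeXt-style decomposition the paper cites, splitting channels across parallel 1×k, k×1, 3×3, and identity branches so that elongated strip kernels capture features at different scales and aspect ratios. All class, function, and parameter names here are hypothetical and are not the paper's actual layers.

```python
import torch
import torch.nn as nn

class AnisotropicBlock(nn.Module):
    """Illustrative anisotropic convolution block (InceptionNeXt-style).

    Splits the input channels into four groups processed by a horizontal
    strip conv (1 x k), a vertical strip conv (k x 1), a small square
    conv (3 x 3), and an identity branch, then concatenates the results.
    """

    def __init__(self, channels: int, k: int = 7):
        super().__init__()
        assert channels % 4 == 0, "channels must split evenly across branches"
        c = channels // 4
        # Depthwise strip convolutions keep the block lightweight.
        self.horizontal = nn.Conv2d(c, c, kernel_size=(1, k),
                                    padding=(0, k // 2), groups=c)
        self.vertical = nn.Conv2d(c, c, kernel_size=(k, 1),
                                  padding=(k // 2, 0), groups=c)
        self.square = nn.Conv2d(c, c, kernel_size=3, padding=1, groups=c)
        # The fourth branch is an identity pass-through.

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        xh, xv, xs, xid = torch.chunk(x, 4, dim=1)
        return torch.cat(
            [self.horizontal(xh), self.vertical(xv), self.square(xs), xid],
            dim=1,
        )

# Example: a feature map from an RGB observation of another robot.
feats = torch.randn(1, 64, 60, 80)
out = AnisotropicBlock(64)(feats)
print(out.shape)  # torch.Size([1, 64, 60, 80])
```

Mixing strip and square kernels is one common way to handle large scale variation cheaply: the elongated receptive fields respond to a robot that appears very small or very large in the frame without stacking many square convolutions.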
