AMCO: Adaptive Multimodal Coupling of Vision and Proprioception for Quadruped Robot Navigation in Outdoor Environments (2403.13235v1)

Published 20 Mar 2024 in cs.RO

Abstract: We present AMCO, a novel navigation method for quadruped robots that adaptively combines vision-based and proprioception-based perception. Our approach uses three cost maps derived from the robot's vision and proprioception data: a general knowledge map, a traversability history map, and a current proprioception map; these are coupled to obtain a single traversability cost map for navigation. The general knowledge map encodes terrains semantically segmented from visual sensing and represents each terrain's typically expected traversability. The traversability history map encodes the robot's recent proprioceptive measurements on a terrain, together with its semantic segmentation, as a cost map. Further, the robot's present proprioceptive measurement is encoded as a cost map in the current proprioception map. Since the general knowledge and traversability history maps rely on semantic segmentation, we assess the reliability of the visual sensory data by estimating the brightness and motion blur of the input RGB images, and combine the three cost maps accordingly to obtain the coupled traversability cost map used for navigation. Leveraging this adaptive coupling, the robot can depend on the most reliable input modality available. Finally, we present a novel planner that selects appropriate gaits and velocities for traversing challenging outdoor environments using the coupled traversability cost map. We demonstrate AMCO's navigation performance in different real-world outdoor environments and observe a 10.8%-34.9% reduction on two stability metrics, and up to a 50% improvement in success rate, compared to current navigation methods.
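For intuition, the sketch below shows one way the adaptive coupling described in the abstract could be realized: a per-frame reliability score estimated from image brightness and a variance-of-Laplacian blur proxy, used to weight the two vision-derived cost maps against the proprioception map. This is a minimal illustration under assumed thresholds and an assumed equal split between the vision-derived maps; it is not the authors' implementation, and all function names and constants are hypothetical.

```python
import numpy as np
import cv2  # OpenCV, e.g. via `pip install opencv-python`


def visual_reliability(frame_bgr: np.ndarray,
                       brightness_lo: float = 40.0,
                       brightness_hi: float = 215.0,
                       blur_thresh: float = 100.0) -> float:
    """Return a score in [0, 1] for how trustworthy the camera frame is.

    Brightness is taken as the mean gray level; motion blur is proxied by
    the variance of the Laplacian (low variance => blurry frame). Both the
    estimators and the thresholds here are illustrative assumptions.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    mean_level = float(gray.mean())
    # Down-weight under- or over-exposed frames.
    b_score = 1.0 if brightness_lo <= mean_level <= brightness_hi else 0.3
    # Variance-of-Laplacian sharpness proxy, clipped to [0, 1].
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    s_score = min(sharpness / blur_thresh, 1.0)
    return b_score * s_score


def coupled_cost_map(general_knowledge: np.ndarray,
                     history: np.ndarray,
                     proprioception: np.ndarray,
                     reliability: float) -> np.ndarray:
    """Blend the three cost maps, leaning on proprioception when vision is poor."""
    vision_based = 0.5 * general_knowledge + 0.5 * history  # equal split (assumed)
    return reliability * vision_based + (1.0 - reliability) * proprioception
```

When the reliability score is high (bright, sharp images), the coupled map is dominated by the segmentation-derived costs; in darkness or under heavy motion blur it falls back toward the proprioception map, matching the paper's stated goal of depending on the most reliable modality available.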

Authors (6)
  1. Mohamed Elnoor (14 papers)
  2. Kasun Weerakoon (21 papers)
  3. Adarsh Jagan Sathyamoorthy (23 papers)
  4. Tianrui Guan (29 papers)
  5. Vignesh Rajagopal (4 papers)
  6. Dinesh Manocha (366 papers)
Citations (2)
