
Follow the Footprints: Self-supervised Traversability Estimation for Off-road Vehicle Navigation based on Geometric and Visual Cues (2402.15363v1)

Published 23 Feb 2024 in cs.RO

Abstract: In this study, we address the off-road traversability estimation problem, which predicts the areas where a robot can navigate in off-road environments. An off-road environment is an unstructured environment comprising a combination of traversable and non-traversable spaces, which presents a challenge for estimating traversability. This study highlights three primary factors that affect a robot's traversability in an off-road environment: surface slope, semantic information, and robot platform. We present two strategies for estimating traversability, using a guide filter network (GFN) and a footprint supervision module (FSM). The first strategy involves building a novel GFN using a newly designed guide filter layer. The GFN interprets the surface and semantic information from the input data and integrates them to extract features optimized for traversability estimation. The second strategy involves developing an FSM, a self-supervision module that utilizes the path traversed by the robot during pre-driving, also known as a footprint. This enables the prediction of traversability that reflects the characteristics of the robot platform. Based on these two strategies, the proposed method overcomes the limitations of existing methods, which require laborious human supervision and lack scalability. Extensive experiments in diverse conditions, spanning robot platforms (automobiles and unmanned ground vehicles) and terrains (herbfields, woodlands, and farmlands), demonstrate that the proposed method is compatible with various robot platforms and adaptable to a range of terrains. Code is available at https://github.com/yurimjeon1892/FtFoot.
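The core idea behind footprint supervision is that the path the robot has already driven provides free positive labels for traversability: cells the robot passed over are known to be traversable, while all other cells remain unlabeled rather than negative. A minimal sketch of this labeling scheme is given below; the function names, grid parameters, and the positive-only loss are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def footprint_mask(poses_xy, grid_size=32, cell=0.5, robot_radius=0.4):
    """Rasterize the robot's driven path (its "footprint") into a BEV grid.

    Cells whose centers lie within robot_radius of any visited pose are
    marked traversable (1); every other cell stays unlabeled (0).
    Hypothetical helper for illustration, not the authors' code.
    """
    mask = np.zeros((grid_size, grid_size), dtype=np.float32)
    half = grid_size * cell / 2.0
    ys, xs = np.mgrid[0:grid_size, 0:grid_size]
    # Cell-center coordinates in meters, robot-centered frame.
    cx = xs * cell - half + cell / 2.0
    cy = ys * cell - half + cell / 2.0
    for x, y in poses_xy:
        mask[(cx - x) ** 2 + (cy - y) ** 2 <= robot_radius ** 2] = 1.0
    return mask

def footprint_loss(pred, mask, eps=1e-7):
    """Positive-only self-supervision: penalize low predicted traversability
    on footprint cells; unlabeled cells contribute nothing to the loss."""
    p = np.clip(pred, eps, 1.0 - eps)
    n_pos = mask.sum()
    return float(-(mask * np.log(p)).sum() / max(n_pos, 1.0))
```

Because only positive labels are available, the loss above leaves unlabeled cells unconstrained; in practice the network's geometric and semantic features generalize the footprint evidence to visually and structurally similar regions.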

