Learning-on-the-Drive: Self-supervised Adaptation of Visual Offroad Traversability Models (2306.15226v2)

Published 27 Jun 2023 in cs.RO

Abstract: Autonomous offroad driving is essential for applications like emergency rescue, military operations, and agriculture. Despite progress, systems struggle with high-speed vehicles exceeding 10 m/s due to the need for accurate long-range (>50 m) perception for safe navigation. Current approaches are limited by sensor constraints; LiDAR-based methods offer precise short-range data but are noisy beyond 30 m, while visual models provide dense long-range measurements but falter with unseen scenarios. To overcome these issues, we introduce ALTER, a learning-on-the-drive perception framework that leverages both sensor types. ALTER uses a self-supervised visual model to learn and adapt from near-range LiDAR measurements, improving long-range prediction in new environments without manual labeling. It also includes a model selection module for better sensor failure response and adaptability to known environments. Testing in two real-world settings showed on average 43.4% better traversability prediction than LiDAR-only and 164% over non-adaptive state-of-the-art (SOTA) visual semantic methods after 45 seconds of online learning.
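
The core self-supervision loop the abstract describes, using trusted near-range LiDAR traversability as labels to adapt a long-range visual model online, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the small encoder-decoder network, the projection helper `lidar_labels_to_image`, the `online_update` step, and the 30 m trust radius are all assumptions inferred only from the abstract.

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

class TraversabilityNet(nn.Module):
    """Toy per-pixel traversability predictor standing in for ALTER's visual model
    (the exact architecture is an assumption; the paper does not specify it here)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))  # per-pixel traversability logit

def lidar_labels_to_image(lidar_points, traversability, K, T_cam_lidar, img_hw,
                          max_range=30.0):
    """Project near-range LiDAR traversability estimates into the camera image to
    build a sparse self-supervision mask (hypothetical helper; names are illustrative)."""
    h, w = img_hw
    label = np.full((h, w), np.nan, dtype=np.float32)        # NaN = unlabeled pixel
    pts_h = np.hstack([lidar_points, np.ones((len(lidar_points), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T)[:3]                     # LiDAR frame -> camera frame
    in_front = pts_cam[2] > 0.1
    near = np.linalg.norm(pts_cam, axis=0) < max_range        # trust LiDAR only at short range
    uv = (K @ pts_cam)[:2] / pts_cam[2]
    u, v = uv[0].astype(int), uv[1].astype(int)
    valid = in_front & near & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    label[v[valid], u[valid]] = traversability[valid]
    return label

def online_update(model, optimizer, image, sparse_label):
    """One learning-on-the-drive step: fit the visual model only where LiDAR-derived
    labels exist; long-range pixels are covered by the model's generalization."""
    logits = model(image.unsqueeze(0)).squeeze()
    target = torch.from_numpy(sparse_label)
    mask = ~torch.isnan(target)
    loss = F.binary_cross_entropy_with_logits(logits[mask], target[mask])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a real system this projection and update would run continuously during driving (the paper reports useful adaptation after about 45 seconds of online learning), paired with the model selection module described in the abstract, which chooses between adapted and prior models to handle sensor failures and previously seen environments.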

