MonoForce: Self-supervised Learning of Physics-informed Model for Predicting Robot-terrain Interaction (2309.09007v5)

Published 16 Sep 2023 in cs.RO

Abstract: While autonomous navigation of mobile robots on rigid terrain is a well-explored problem, navigating on deformable terrain such as tall grass or bushes remains a challenge. To address it, we introduce an explainable, physics-aware and end-to-end differentiable model which predicts the outcome of robot-terrain interaction from camera images, on both rigid and non-rigid terrain. The proposed MonoForce model consists of a black-box module which predicts robot-terrain interaction forces from onboard cameras, followed by a white-box module, which transforms these forces and control signals into predicted trajectories, using only the laws of classical mechanics. The differentiable white-box module allows backpropagating the predicted trajectory errors into the black-box module, serving as a self-supervised loss that measures consistency between the predicted forces and ground-truth trajectories of the robot. Experimental evaluation on a public dataset and our own data has shown that while the prediction capabilities are comparable to state-of-the-art algorithms on rigid terrain, MonoForce shows superior accuracy on non-rigid terrain such as tall grass or bushes. To facilitate the reproducibility of our results, we release both the code and datasets.
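The two-module design in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a simplified 2D point-mass robot, explicit Euler integration, and hypothetical array shapes for the predicted forces and controls. It shows only the white-box idea, i.e. turning predicted interaction forces plus control forces into a trajectory with classical mechanics, and the trajectory-error loss that would be backpropagated into the force predictor.

```python
import numpy as np

def white_box_rollout(forces, controls, x0, v0, mass=1.0, dt=0.1):
    """Integrate predicted terrain forces into a trajectory via Newton's second law.

    forces:   (T, 2) terrain-interaction forces from the black-box module (assumed shape)
    controls: (T, 2) forces induced by the control signals (assumed shape)
    Returns an (T, 2) array of robot positions.
    """
    x, v = np.asarray(x0, dtype=float), np.asarray(v0, dtype=float)
    trajectory = []
    for f_terrain, f_ctrl in zip(forces, controls):
        a = (f_terrain + f_ctrl) / mass   # F = m * a
        v = v + a * dt                    # explicit Euler velocity update
        x = x + v * dt                    # explicit Euler position update
        trajectory.append(x.copy())
    return np.stack(trajectory)

def trajectory_loss(pred_traj, gt_traj):
    """Self-supervised loss: mean squared error to the ground-truth trajectory."""
    return float(np.mean((pred_traj - gt_traj) ** 2))
```

In the actual model every operation in the rollout is differentiable, so the gradient of `trajectory_loss` flows through the physics back to the force predictor; here plain NumPy is used only to keep the sketch self-contained.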

