LANCAR: Leveraging Language for Context-Aware Robot Locomotion in Unstructured Environments (2310.00481v3)

Published 30 Sep 2023 in cs.RO

Abstract: Navigating robots through unstructured terrains is challenging, primarily due to dynamic environmental changes. While humans adeptly navigate such terrains by using context from their observations, creating a similar context-aware navigation system for robots is difficult. The essence of the issue lies in acquiring and interpreting context information, a task complicated by the inherent ambiguity of human language. In this work, we introduce LANCAR, which addresses this issue by combining a context translator with reinforcement learning (RL) agents for context-aware locomotion. LANCAR enables robots to comprehend context information, sourced from human observers, through LLMs, and converts this information into actionable context embeddings. These embeddings, combined with the robot's sensor data, form the complete input to the RL agent's policy network. We provide an extensive evaluation of LANCAR under different levels of context ambiguity and compare it with alternative methods. The experimental results showcase its superior generalizability and adaptability across different terrains. Notably, LANCAR shows at least a 7.4% increase in episodic reward over the best alternatives, highlighting its potential to enhance robotic navigation in unstructured environments. More details and experiment videos can be found at http://raaslab.org/projects/LLM_Context_Estimation/
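The fusion step the abstract describes, concatenating an LLM-derived context embedding with the robot's sensor readings to form the policy input, can be illustrated with a short PyTorch sketch. This is not the authors' implementation: the class name `ContextAwarePolicy` and all dimensions (`sensor_dim`, `context_dim`, `action_dim`) and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ContextAwarePolicy(nn.Module):
    """Illustrative sketch of the input fusion described in the abstract:
    the RL policy consumes sensor data concatenated with a context
    embedding produced by an LLM from a human observer's description.
    All dimensions and layer sizes here are assumptions."""

    def __init__(self, sensor_dim=32, context_dim=16, action_dim=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sensor_dim + context_dim, 128),
            nn.ReLU(),
            nn.Linear(128, action_dim),
            nn.Tanh(),  # bounded actuator commands
        )

    def forward(self, sensor_obs, context_embedding):
        # Concatenate proprioceptive/sensor readings with the
        # LLM-derived context embedding to form the full policy input.
        x = torch.cat([sensor_obs, context_embedding], dim=-1)
        return self.net(x)

# Usage: stand-in tensors for one observation step, e.g. an embedding
# of the (hypothetical) observer remark "the ground ahead looks muddy".
sensor_obs = torch.randn(1, 32)         # robot sensor readings
context_embedding = torch.randn(1, 16)  # placeholder for LLM output
action = ContextAwarePolicy()(sensor_obs, context_embedding)
```

The design point is that the policy network itself stays a standard RL actor; context awareness enters only through the extra embedding dimensions appended to the observation.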
