RePLan: Robotic Replanning with Perception and Language Models (2401.04157v2)

Published 8 Jan 2024 in cs.RO

Abstract: Advancements in LLMs have demonstrated their potential in facilitating high-level reasoning, logical reasoning and robotics planning. Recently, LLMs have also been able to generate reward functions for low-level robot actions, effectively bridging the interface between high-level planning and low-level robot control. However, the challenge remains that even with syntactically correct plans, robots can still fail to achieve their intended goals due to imperfect plans or unexpected environmental issues. To overcome this, Vision Language Models (VLMs) have shown remarkable success in tasks such as visual question answering. Leveraging the capabilities of VLMs, we present a novel framework called Robotic Replanning with Perception and Language Models (RePLan) that enables online replanning capabilities for long-horizon tasks. This framework utilizes the physical grounding provided by a VLM's understanding of the world's state to adapt robot actions when the initial plan fails to achieve the desired goal. We developed a Reasoning and Control (RC) benchmark with eight long-horizon tasks to test our approach. We find that RePLan enables a robot to successfully adapt to unforeseen obstacles while accomplishing open-ended, long-horizon goals, where baseline models cannot, and can be readily applied to real robots. Find more information at https://replan-lm.github.io/replan.github.io/

Overview of the RePLan Framework

RePLan is a framework that addresses a critical challenge in robotics: enabling robots to perform long-horizon tasks with minimal human intervention. The paper presents a system that can autonomously generate and revise plans for robots by integrating LLMs and Vision Language Models (VLMs). This synergistic approach allows robots to form high-level plans and then translate them into specific low-level actions.

Bridging High-level Planning and Low-level Control

Traditional methods for long-term planning in robotics, such as Hierarchical Reinforcement Learning (HRL) or Imitation Learning (IL), often require extensive domain knowledge and large datasets for task learning. By contrast, the use of LLMs offers considerable promise given their capability for high-level reasoning. However, one of the key challenges in applying LLMs is reconciling their open-ended text generation with the more constrained instructions robots need for task execution. Additionally, the task environment is dynamic, and unforeseen changes require robots to adapt quickly. This is where RePLan steps in, combining the high-level contextual understanding of LLMs with real-time scene interpretation from VLMs, enabling precise robot task execution and real-time adjustments to the plan.
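To make the plan-to-control interface concrete, the sketch below shows one way an LLM could turn a single high-level plan step into a low-level reward function, in the spirit of the reward-generation line of work the abstract mentions. This is a minimal illustration, not the paper's implementation: `query_llm`, the prompt wording, and the `reward(state)` signature are assumptions introduced here.

```python
# Minimal sketch of an LLM-to-reward interface (illustrative only).
# `query_llm`, the prompt wording, and the reward signature are assumptions.
import numpy as np


def query_llm(prompt: str) -> str:
    """Placeholder for an LLM API call that returns Python source code."""
    raise NotImplementedError


def reward_from_plan_step(step: str):
    """Ask the LLM to write a scalar reward function for one plan step."""
    prompt = (
        "Write a Python function `reward(state)` returning a scalar reward "
        f"for the robot sub-goal: '{step}'. `state` maps object names to "
        "3D positions, with the end-effector stored under key 'gripper'."
    )
    namespace: dict = {}
    exec(query_llm(prompt), namespace)  # generated code is trusted here
    return namespace["reward"]


# Example of the kind of reward such a prompt might yield for the
# sub-goal "move the gripper to the red block":
def example_reward(state) -> float:
    gripper = np.asarray(state["gripper"])
    target = np.asarray(state["red_block"])
    return -float(np.linalg.norm(gripper - target))  # closer is better
```

The returned function can then be handed to a low-level controller (e.g., a sampling-based MPC) that optimizes actions against it, which is how a free-form language plan step becomes an executable control objective.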

Integrating Visual Feedback into Replanning

The agility of the RePLan system lies in its real-time replanning capability. It uses a multi-layered structure with two planners: a high-level planner generates the overarching strategy for the task at hand, while a low-level planner translates these plans into detailed motor actions. Both levels of planning are screened by a verifier to minimize errors. If an executed plan does not succeed due to an unexpected incident or environmental change, the robot does not simply repeat the same process. Instead, it calls upon the VLM Perceiver for insights into what went wrong. The Perceiver, trained on tasks such as visual question answering, provides feedback that informs the robot's next course of action.
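The control flow described above can be summarized in a short sketch. This is not the authors' code; the component interfaces (`HighLevelPlanner`, `LowLevelPlanner`, `Perceiver`, `verified`, `execute`, `get_image`) are assumed names used only to illustrate the plan, verify, execute, perceive, replan loop.

```python
# Illustrative RePLan-style replanning loop (assumed interfaces, not the paper's code).
from dataclasses import dataclass


@dataclass
class WorldFeedback:
    success: bool
    description: str = ""  # VLM's explanation of what it observed


class Perceiver:
    """VLM wrapper: answers 'what went wrong?' questions from camera images."""
    def diagnose(self, goal: str, image) -> WorldFeedback:
        raise NotImplementedError  # e.g., a visual question answering call


class HighLevelPlanner:
    def plan(self, goal: str, feedback: str = "") -> list[str]:
        raise NotImplementedError  # LLM call returning an ordered list of sub-goals


class LowLevelPlanner:
    def to_actions(self, subgoal: str) -> list:
        raise NotImplementedError  # LLM call returning motor actions / reward specs


def verified(items: list, verifier) -> list:
    """Screen planner output; keep only steps the verifier accepts."""
    return [x for x in items if verifier(x)]


def replan_loop(goal, hl, ll, perceiver, verifier, execute, get_image,
                max_rounds: int = 5) -> bool:
    feedback = ""
    for _ in range(max_rounds):
        subgoals = verified(hl.plan(goal, feedback), verifier)
        for sg in subgoals:
            actions = verified(ll.to_actions(sg), verifier)
            execute(actions)
        report = perceiver.diagnose(goal, get_image())
        if report.success:
            return True
        feedback = report.description  # fold the VLM diagnosis into the next plan
    return False
```

The key design point this sketch tries to capture is that failure handling is perceptual rather than blind: instead of retrying the same plan, the VLM's natural-language diagnosis is fed back to the high-level planner so the next plan accounts for what was actually observed.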

Testing the Capabilities

The capabilities of RePLan were demonstrated in four simulated environments, each posing unique challenges that required a robot to complete multiple steps or adapt to changes. Compared to existing models, RePLan achieved success rates nearly four times those of competitive baselines across a variety of tasks. This underscores its potential to deal effectively with the complexity and variability inherent in real-world robotic applications.

Conclusion

In conclusion, RePLan is a noteworthy step toward true robotic autonomy. With its combination of LLMs and VLMs for planning and execution, it tackles the prevalent problem of rigid task planning that cannot accommodate dynamic environments. Its successful real-time adjustments in response to unforeseen changes mark a shift toward more adaptive, reliable, and intelligent robotic systems. While it is not without limitations, such as its reliance on the accuracy of LLM and VLM interpretations, RePLan presents fertile ground for further research and development in robotics.

Authors (6)
  1. Marta Skreta (12 papers)
  2. Zihan Zhou (90 papers)
  3. Jia Lin Yuan (2 papers)
  4. Kourosh Darvish (17 papers)
  5. Alán Aspuru-Guzik (226 papers)
  6. Animesh Garg (129 papers)
Citations (18)