
Correcting Robot Plans with Natural Language Feedback (2204.05186v1)

Published 11 Apr 2022 in cs.RO, cs.AI, cs.CL, cs.CV, and cs.LG

Abstract: When humans design cost or goal specifications for robots, they often produce specifications that are ambiguous, underspecified, or beyond planners' ability to solve. In these cases, corrections provide a valuable tool for human-in-the-loop robot control. Corrections might take the form of new goal specifications, new constraints (e.g. to avoid specific objects), or hints for planning algorithms (e.g. to visit specific waypoints). Existing correction methods (e.g. using a joystick or direct manipulation of an end effector) require full teleoperation or real-time interaction. In this paper, we explore natural language as an expressive and flexible tool for robot correction. We describe how to map from natural language sentences to transformations of cost functions. We show that these transformations enable users to correct goals, update robot motions to accommodate additional user preferences, and recover from planning errors. These corrections can be leveraged to get 81% and 93% success rates on tasks where the original planner failed, with either one or two language corrections. Our method makes it possible to compose multiple constraints and generalizes to unseen scenes, objects, and sentences in simulated environments and real-world environments.
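To make the abstract's core idea concrete, below is a minimal sketch of how a language correction could map to a transformation of a trajectory cost function, with corrections composing additively. This is not the authors' implementation; the function names, the distance-based penalty, and the additive composition rule are illustrative assumptions.

```python
# Sketch (assumptions, not the paper's method): a correction such as
# "stay away from the cup" becomes a penalty term added to the planner's
# original cost, and multiple corrections compose by summing penalties.
import numpy as np

def base_cost(trajectory: np.ndarray, goal: np.ndarray) -> float:
    """Original planner cost: distance of the final waypoint to the goal."""
    return float(np.linalg.norm(trajectory[-1] - goal))

def avoid_object(obj_pos: np.ndarray, radius: float = 0.2, weight: float = 10.0):
    """Hypothetical correction: penalize waypoints within `radius` of an object."""
    def penalty(trajectory: np.ndarray) -> float:
        dists = np.linalg.norm(trajectory - obj_pos, axis=1)
        return weight * float(np.sum(np.maximum(0.0, radius - dists)))
    return penalty

def apply_corrections(cost_fn, penalties):
    """Compose corrections by adding their penalty terms to the base cost."""
    def corrected(trajectory: np.ndarray, goal: np.ndarray) -> float:
        return cost_fn(trajectory, goal) + sum(p(trajectory) for p in penalties)
    return corrected

# Usage: one correction applied; a second would simply extend the list,
# mirroring the paper's setting of one or two language corrections.
goal = np.array([1.0, 0.0, 0.5])
cup = np.array([0.5, 0.1, 0.5])
corrected_cost = apply_corrections(base_cost, [avoid_object(cup)])
traj = np.linspace([0.0, 0.0, 0.5], [1.0, 0.0, 0.5], num=20)
print(corrected_cost(traj, goal))
```

The design choice sketched here (corrections as additive penalty terms) is one plausible reading of "transformations of cost functions"; the paper itself learns the mapping from sentences to these transformations.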

Authors (8)
  1. Pratyusha Sharma (15 papers)
  2. Balakumar Sundaralingam (32 papers)
  3. Valts Blukis (23 papers)
  4. Chris Paxton (59 papers)
  5. Tucker Hermans (57 papers)
  6. Antonio Torralba (178 papers)
  7. Jacob Andreas (116 papers)
  8. Dieter Fox (201 papers)
Citations (85)