Distilling and Retrieving Generalizable Knowledge for Robot Manipulation via Language Corrections (2311.10678v2)

Published 17 Nov 2023 in cs.RO, cs.AI, and cs.LG

Abstract: Today's robot policies exhibit subpar performance when faced with the challenge of generalizing to novel environments. Human corrective feedback is a crucial form of guidance to enable such generalization. However, adapting to and learning from online human corrections is a non-trivial endeavor: not only do robots need to remember human feedback over time so they can retrieve the right information in new settings and reduce the intervention rate, but they also need to respond to feedback that can range from corrections of high-level human preferences to low-level adjustments of skill parameters. In this work, we present Distillation and Retrieval of Online Corrections (DROC), an LLM-based system that can respond to arbitrary forms of language feedback, distill generalizable knowledge from corrections, and retrieve relevant past experiences based on textual and visual similarity to improve performance in novel settings. DROC is able to respond to a sequence of online language corrections that address failures in both high-level task plans and low-level skill primitives. We demonstrate that DROC effectively distills the relevant information from the sequence of online corrections into a knowledge base and retrieves that knowledge in settings with new task or object instances. DROC outperforms other techniques that directly generate robot code via LLMs, requiring only half as many corrections in the first round and few to no corrections after two iterations. We show further results, videos, prompts and code on https://sites.google.com/stanford.edu/droc .
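The distill-and-retrieve loop the abstract describes can be sketched as a knowledge base that stores distilled correction knowledge and retrieves the most similar past entry for a new task. This is a minimal illustration only: the class name, the toy bag-of-words similarity, and the threshold are all assumptions for exposition, whereas DROC itself uses LLM-based distillation plus learned textual and visual embeddings.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" (illustrative stand-in; DROC uses
    # learned textual/visual similarity, not word counts).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class KnowledgeBase:
    """Stores distilled corrections; retrieves the closest past entry."""
    def __init__(self):
        self.entries = []  # list of (task context, distilled knowledge)

    def distill(self, context, knowledge):
        # In DROC an LLM distills generalizable knowledge from the raw
        # correction sequence; here we store it directly.
        self.entries.append((context, knowledge))

    def retrieve(self, query, threshold=0.5):
        # Return the stored knowledge most similar to the new task,
        # or None if nothing is similar enough (hypothetical threshold).
        scored = [(cosine(embed(query), embed(ctx)), know)
                  for ctx, know in self.entries]
        best = max(scored, default=(0.0, None))
        return best[1] if best[0] >= threshold else None

kb = KnowledgeBase()
kb.distill("open the top drawer",
           "grasp the handle from above before pulling")
kb.distill("place the cup on the shelf",
           "approach the shelf slowly to avoid collisions")

# A new object instance of a seen task retrieves the distilled knowledge.
print(kb.retrieve("open the bottom drawer"))
```

The key design point mirrored here is that retrieval keys on similarity to past task contexts rather than exact matches, which is what lets knowledge distilled from corrections transfer to new task or object instances.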

Authors (8)
  1. Lihan Zha (4 papers)
  2. Yuchen Cui (19 papers)
  3. Li-Heng Lin (3 papers)
  4. Minae Kwon (10 papers)
  5. Montserrat Gonzalez Arenas (8 papers)
  6. Andy Zeng (54 papers)
  7. Fei Xia (111 papers)
  8. Dorsa Sadigh (162 papers)
Citations (25)