Aligning Human Intent from Imperfect Demonstrations with Confidence-based Inverse soft-Q Learning (2312.11194v3)

Published 18 Dec 2023 in cs.RO

Abstract: Imitation learning has attracted much attention for its ability to let robots quickly learn human manipulation skills from demonstrations. In the real world, however, human demonstrations often contain random behavior that the demonstrator did not intend, and collecting high-quality human datasets is both challenging and expensive. Robots therefore need to learn behavioral policies that align with human intent from imperfect demonstrations. Previous work uses confidence scores to extract useful information from imperfect demonstrations, but relies on access to ground-truth rewards or active human supervision. In this paper, we propose a transition-based method that obtains fine-grained confidence scores for the data without either requirement, and which increases the success rate of the baseline algorithm by 40.3% on average. Building on this, we develop a generalized confidence-based imitation learning framework for guiding policy learning, called Confidence-based Inverse soft-Q Learning (CIQL), as shown in Fig. 1. We then analyze two ways of handling noisy demonstrations and find that penalization aligns better with human intent than filtering.
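The abstract contrasts two ways of using per-transition confidence scores: filtering (discard low-confidence transitions) and penalization (use them as negative evidence). The paper formulates this within inverse soft-Q learning; the sketch below is not the authors' objective but a toy, behavioral-cloning-style surrogate, assuming confidence scores in [0, 1] are already given per transition. All names, shapes, and the exact weighting scheme here are illustrative assumptions.

```python
# Minimal sketch (not the authors' CIQL objective): contrasting "filter" vs.
# "penalize" noise handling with per-transition confidence scores.
# Assumptions: discrete actions, a simple cross-entropy imitation surrogate,
# and a hypothetical confidence threshold tau for the filtering mode.
import torch
import torch.nn as nn

def confidence_weighted_loss(logits, actions, confidence, mode="penalize", tau=0.5):
    """logits: (B, A) policy logits; actions: (B,) demonstrated actions;
    confidence: (B,) per-transition confidence scores in [0, 1]."""
    # Per-transition imitation loss (negative log-likelihood of the demo action).
    nll = nn.functional.cross_entropy(logits, actions, reduction="none")  # (B,)
    if mode == "filter":
        # Filtering: keep only transitions whose confidence exceeds tau;
        # low-confidence data is simply discarded.
        mask = (confidence > tau).float()
        return (mask * nll).sum() / mask.sum().clamp(min=1.0)
    elif mode == "penalize":
        # Penalization: map confidence [0, 1] -> weight [-1, 1], so
        # high-confidence transitions are imitated while low-confidence
        # transitions actively push the policy away from them.
        weight = 2.0 * confidence - 1.0
        return (weight * nll).mean()
    raise ValueError(f"unknown mode: {mode}")

# Usage on random data, for shape-checking only.
logits = torch.randn(8, 4, requires_grad=True)
actions = torch.randint(0, 4, (8,))
conf = torch.rand(8)
confidence_weighted_loss(logits, actions, conf, mode="penalize").backward()
```

The design difference this sketch highlights is that filtering throws information away, while penalization keeps every transition but reverses the gradient direction on low-confidence ones, which is the behavior the abstract reports as better aligned with human intent.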

Authors (5)
  1. Xizhou Bu
  2. Zhiqiang Ma
  3. Zhengxiong Liu
  4. Wenjuan Li
  5. Panfeng Huang
