Learning Dense Rewards for Contact-Rich Manipulation Tasks (2011.08458v1)

Published 17 Nov 2020 in cs.RO

Abstract: Rewards play a crucial role in reinforcement learning. To arrive at the desired policy, designing a suitable reward function often requires significant domain expertise as well as trial and error. Here, we aim to minimize the effort involved in designing reward functions for contact-rich manipulation tasks. In particular, we provide an approach capable of extracting dense reward functions algorithmically from robots' high-dimensional observations, such as images and tactile feedback. In contrast to state-of-the-art high-dimensional reward learning methodologies, our approach does not leverage adversarial training and is thus less prone to the associated training instabilities. Instead, it learns rewards by estimating task progress in a self-supervised manner. We demonstrate the effectiveness and efficiency of our approach on two contact-rich manipulation tasks, namely peg-in-hole and USB insertion. The experimental results indicate that policies trained with the learned reward function achieve better performance and converge faster than the baselines.
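The central idea of the abstract, estimating task progress from observations and using the estimate as a dense reward, admits a compact sketch. The code below is a minimal illustration under stated assumptions, not the paper's implementation: the names (ProgressEstimator, train_progress_estimator, dense_reward), the MLP architecture over generic observation features, and the exact training loop are all hypothetical. The self-supervision comes from labeling each observation in a demonstration trajectory with its normalized time index, so no hand-designed reward or adversarial discriminator is needed.

```python
import torch
import torch.nn as nn

class ProgressEstimator(nn.Module):
    """Maps high-dimensional observation features to a scalar progress value in [0, 1].

    The paper uses images and tactile feedback; here a flat feature vector
    of size obs_dim stands in for those modalities (an assumption).
    """
    def __init__(self, obs_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1), nn.Sigmoid(),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs).squeeze(-1)

def train_progress_estimator(model, trajectories, epochs=50, lr=1e-3):
    """Self-supervised training: the label for the t-th observation in a
    trajectory of length T is its normalized index t / (T - 1), obtained
    for free from temporal ordering rather than manual annotation."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for traj in trajectories:  # traj: (T, obs_dim) tensor
            targets = torch.linspace(0.0, 1.0, traj.shape[0])
            opt.zero_grad()
            loss = loss_fn(model(traj), targets)
            loss.backward()
            opt.step()
    return model

def dense_reward(model, obs):
    """Use the estimated task progress directly as a dense reward signal
    for the downstream RL policy."""
    with torch.no_grad():
        return model(obs.unsqueeze(0)).item()
```

Because the regression target is a smooth function of time, the learned reward is dense by construction, which is what lets the downstream policy converge faster than with sparse success signals; avoiding an adversarial discriminator is what sidesteps the training instabilities the abstract mentions.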

Authors (5)
  1. Zheng Wu (44 papers)
  2. Wenzhao Lian (14 papers)
  3. Vaibhav Unhelkar (10 papers)
  4. Masayoshi Tomizuka (261 papers)
  5. Stefan Schaal (73 papers)
Citations (34)
