Auxiliary Reward Generation with Transition Distance Representation Learning (2402.07412v1)

Published 12 Feb 2024 in cs.LG and cs.AI

Abstract: Reinforcement learning (RL) has shown its strength in challenging sequential decision-making problems. The reward function in RL is crucial to learning performance, as it serves as a measure of the degree of task completion. In real-world problems, rewards are predominantly human-designed, which requires laborious tuning and is easily affected by human cognitive biases. To achieve automatic auxiliary reward generation, we propose a novel representation learning approach that can measure the "transition distance" between states. Building on these representations, we introduce an auxiliary reward generation technique for both single-task and skill-chaining scenarios that requires no human knowledge. The proposed approach is evaluated on a wide range of manipulation tasks. The experimental results demonstrate the effectiveness of measuring the transition distance between states and of the improvements induced by the auxiliary rewards, which not only promote better learning efficiency but also increase convergence stability.
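
To make the idea concrete, below is a minimal, hypothetical sketch (PyTorch, not the authors' released code) of how a transition-distance representation might be trained and turned into an auxiliary reward: an encoder is trained so that distances in embedding space track the number of environment steps between states sampled from the same trajectory, and the auxiliary reward is the negative predicted distance to a goal state. The network shapes, the regression loss, and all names here are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of transition-distance representation learning.
# Assumption: embedding distance is regressed onto the observed step gap
# between two states from the same trajectory; the paper's actual
# objective may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransitionDistanceEncoder(nn.Module):
    """Maps raw states to an embedding space where L2 distance is meant
    to approximate the number of environment transitions between states."""
    def __init__(self, state_dim: int, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128),
            nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

def distance_loss(encoder: TransitionDistanceEncoder,
                  s_i: torch.Tensor, s_j: torch.Tensor,
                  k: torch.Tensor) -> torch.Tensor:
    """Regress the embedding distance between s_i and s_j onto the
    step gap k observed between them in a sampled trajectory."""
    d = torch.norm(encoder(s_i) - encoder(s_j), dim=-1)
    return F.mse_loss(d, k.float())

def auxiliary_reward(encoder: TransitionDistanceEncoder,
                     s: torch.Tensor, s_goal: torch.Tensor) -> torch.Tensor:
    """Auxiliary reward: negative predicted transition distance to the
    goal state, so moving 'closer' (in transition steps) is rewarded."""
    with torch.no_grad():
        return -torch.norm(encoder(s) - encoder(s_goal), dim=-1)
```

In the single-task setting, a shaped reward of this form could simply be added to the environment reward; for the skill-chaining scenario the abstract mentions, the goal state could plausibly be the initiation state of the next skill, though the paper's exact construction may differ.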

Authors (5)
  1. Siyuan Li (140 papers)
  2. Shijie Han (8 papers)
  3. Yingnan Zhao (8 papers)
  4. By Liang (1 paper)
  5. Peng Liu (372 papers)
Citations (2)
