
Reinforced Imitation in Heterogeneous Action Space (1904.03438v2)

Published 6 Apr 2019 in cs.LG, cs.AI, and stat.ML

Abstract: Imitation learning is an effective alternative approach for learning a policy when the reward function is sparse. In this paper, we consider a challenging setting where the agent and the expert have different action spaces. We assume that the agent has access to a sparse reward function and state-only expert observations. We propose a method that gradually balances the imitation learning cost against the reinforcement learning objective, adapting the agent's policy to either mimic expert behavior or maximize the sparse reward. We show, through navigation scenarios, that (i) the agent can efficiently leverage sparse rewards to outperform standard state-only imitation learning, (ii) it can learn a policy even when its actions differ from the expert's, and (iii) the agent's performance is not bounded by that of the expert, due to the optimized usage of sparse rewards.
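The "gradual balancing" the abstract describes can be pictured as an annealed combination of the two objectives. The sketch below is purely illustrative: the linear schedule, the function names, and the simple sign convention (minimize imitation cost, maximize reward) are assumptions, not the paper's actual formulation.

```python
def blended_loss(imitation_cost: float, rl_objective: float,
                 step: int, total_steps: int) -> float:
    """Illustrative sketch of balancing imitation against sparse reward.

    alpha decays linearly from 1 (pure imitation of the expert) to 0
    (pure maximization of the sparse reward). The paper's actual
    schedule and loss terms may differ.
    """
    alpha = max(0.0, 1.0 - step / total_steps)
    # Minimize the imitation cost; subtract the RL objective so that
    # maximizing reward lowers the combined loss.
    return alpha * imitation_cost - (1.0 - alpha) * rl_objective
```

Early in training (`step` near 0) the loss is dominated by the imitation term, so the agent mimics the state-only expert; late in training the sparse-reward term dominates, which is consistent with the claim that the agent's performance is not bounded by the expert's.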

Authors (5)
  1. Konrad Zolna (24 papers)
  2. Negar Rostamzadeh (38 papers)
  3. Yoshua Bengio (601 papers)
  4. Sungjin Ahn (51 papers)
  5. Pedro O. Pinheiro (24 papers)
Citations (11)