Improving Interactive Reinforcement Agent Planning with Human Demonstration (1904.08621v1)

Published 18 Apr 2019 in cs.AI, cs.HC, and cs.LG

Abstract: TAMER has proven to be a powerful interactive reinforcement learning method that allows ordinary people to teach and personalize autonomous agents' behavior by providing evaluative feedback. However, a TAMER agent planning with UCT, a Monte Carlo Tree Search strategy, can only update states along its path and may incur a high learning cost, especially for a physical robot. In this paper, we propose to drive the agent's exploration along the optimal path and reduce the learning cost by initializing the agent's reward function via inverse reinforcement learning from demonstration. We test our proposed method in the RL benchmark Grid World domain with different discounts on human reward. Our results show that learning from demonstration allows a TAMER agent to learn a roughly optimal policy up to the deepest search depth and encourages the agent to explore along the optimal path. In addition, we find that learning from demonstration improves learning efficiency by reducing the total feedback and the number of incorrect actions, and by increasing the ratio of correct actions required to obtain an optimal policy, allowing a TAMER agent to converge faster.
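
The proposed pipeline lends itself to a short sketch. The following is a minimal, hypothetical Python illustration (not the authors' code): a human demonstration seeds a reward function, here approximated by a crude state-visitation count standing in for the paper's inverse reinforcement learning step, and that reward then biases UCT (Monte Carlo Tree Search) rollouts in a small grid world. The grid size, simulation budget, exploration constant, and discount factor are all illustrative assumptions.

```python
import math
import random
from collections import defaultdict

random.seed(0)

# Hypothetical 5x5 grid world: start at (0, 0), goal at (4, 4).
SIZE = 5
GOAL = (SIZE - 1, SIZE - 1)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """Deterministic transition; moves off the grid leave the agent in place."""
    x = min(max(state[0] + action[0], 0), SIZE - 1)
    y = min(max(state[1] + action[1], 0), SIZE - 1)
    return (x, y)

def reward_from_demonstration(demos):
    """Crude stand-in for IRL: score each state by how often the demonstrations
    visit it, so states on the demonstrated path get a higher initial reward."""
    counts = defaultdict(float)
    for trajectory in demos:
        for s in trajectory:
            counts[s] += 1.0
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}

def uct_plan(root, reward, n_sims=200, horizon=10, c=1.4, gamma=0.9):
    """Pick one action at `root` with UCT, using the demonstration-seeded
    reward to bias rollouts toward the demonstrated path."""
    q = defaultdict(float)   # (state, action) -> mean return estimate
    n = defaultdict(int)     # (state, action) -> visit count
    n_s = defaultdict(int)   # state -> visit count
    for _ in range(n_sims):
        s, path = root, []
        for _ in range(horizon):
            untried = [a for a in ACTIONS if n[(s, a)] == 0]
            if untried:                       # expand untried actions first
                a = random.choice(untried)
            else:                             # UCB1 selection among tried ones
                a = max(ACTIONS, key=lambda a: q[(s, a)]
                        + c * math.sqrt(math.log(n_s[s]) / n[(s, a)]))
            s_next = step(s, a)
            r = reward.get(s_next, 0.0) + (1.0 if s_next == GOAL else 0.0)
            path.append((s, a, r))
            s = s_next
            if s == GOAL:
                break
        g = 0.0                               # back up discounted return-to-go
        for (ps, pa, pr) in reversed(path):
            g = pr + gamma * g
            n_s[ps] += 1
            n[(ps, pa)] += 1
            q[(ps, pa)] += (g - q[(ps, pa)]) / n[(ps, pa)]
    return max(ACTIONS, key=lambda a: q[(root, a)])

# One human demonstration along the diagonal to the goal.
reward = reward_from_demonstration([[(i, i) for i in range(SIZE)]])

s = (0, 0)
for _ in range(50):                           # step cap so the sketch halts
    s = step(s, uct_plan(s, reward))
    print(s)
    if s == GOAL:
        break
```

Because rollouts score demonstrated states more highly, the search concentrates its simulation budget along the demonstrated path, which is the exploration-steering effect the abstract describes.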

Authors (6)
  1. Guangliang Li (6 papers)
  2. Randy Gomez (8 papers)
  3. Keisuke Nakamura (5 papers)
  4. Jinying Lin (2 papers)
  5. Qilei Zhang (4 papers)
  6. Bo He (32 papers)
Citations (2)
