
On Value Discrepancy of Imitation Learning (1911.07027v1)

Published 16 Nov 2019 in cs.LG, cs.AI, and stat.ML

Abstract: Imitation learning trains a policy from expert demonstrations. Imitation learning approaches have been designed from various principles, such as behavioral cloning via supervised learning, apprenticeship learning via inverse reinforcement learning, and GAIL via generative adversarial learning. In this paper, we propose a framework for analyzing the theoretical properties of imitation learning approaches based on discrepancy propagation analysis. Under the infinite-horizon setting, the framework yields a value discrepancy for behavioral cloning of order O((1-\gamma)^{-2}). We also show that the framework yields a value discrepancy for GAIL of order O((1-\gamma)^{-1}). This implies that GAIL suffers fewer compounding errors than behavioral cloning, which we also verify empirically in this paper. To the best of our knowledge, we are the first to analyze GAIL's performance theoretically. These results indicate that the proposed framework is a general tool for analyzing imitation learning approaches. We hope our theoretical results provide insights for future improvements of imitation learning algorithms.
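The abstract's key quantitative claim is the gap between the two bounds: behavioral cloning's value discrepancy scales quadratically in the effective horizon 1/(1-γ), while GAIL's scales linearly. The sketch below (constants are illustrative, not the paper's exact bounds; `eps` stands for a per-step policy error) shows how the two orders diverge as γ approaches 1:

```python
# Illustrative sketch of the O((1-gamma)^-2) vs. O((1-gamma)^-1) scaling
# from the abstract. The constant factor is set to 1 for simplicity; the
# paper's exact constants differ.

def bc_value_gap(eps: float, gamma: float) -> float:
    """Behavioral cloning: quadratic dependence on the effective horizon."""
    return eps / (1.0 - gamma) ** 2

def gail_value_gap(eps: float, gamma: float) -> float:
    """GAIL: linear dependence on the effective horizon."""
    return eps / (1.0 - gamma)

if __name__ == "__main__":
    eps = 0.01
    for gamma in (0.9, 0.99, 0.999):
        bc, gail = bc_value_gap(eps, gamma), gail_value_gap(eps, gamma)
        # The ratio equals the effective horizon 1/(1-gamma), i.e. the
        # extra compounding-error penalty paid by behavioral cloning.
        print(f"gamma={gamma}: BC={bc:.3g}, GAIL={gail:.3g}, ratio={bc/gail:.0f}")
```

For γ = 0.999 the behavioral-cloning gap is 1000× the GAIL gap under this toy scaling, which is the "compounding errors" phenomenon the abstract refers to.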

Authors (3)
  1. Tian Xu (41 papers)
  2. Ziniu Li (24 papers)
  3. Yang Yu (385 papers)
Citations (5)
