
$f$-GAIL: Learning $f$-Divergence for Generative Adversarial Imitation Learning (2010.01207v2)

Published 2 Oct 2020 in cs.LG, cs.AI, and stat.ML

Abstract: Imitation learning (IL) aims to learn a policy from expert demonstrations that minimizes the discrepancy between the learner's and the expert's behaviors. Various imitation learning algorithms have been proposed, each with a different pre-determined divergence to quantify this discrepancy. This naturally raises the question: given a set of expert demonstrations, which divergence recovers the expert policy more accurately and with higher data efficiency? In this work, we propose $f$-GAIL, a new generative adversarial imitation learning (GAIL) model that automatically learns a discrepancy measure from the $f$-divergence family, as well as a policy capable of producing expert-like behaviors. Compared with IL baselines using various predefined divergence measures, $f$-GAIL learns better policies with higher data efficiency on six physics-based control tasks.
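
For context, a hedged sketch of the machinery involved (standard background from the $f$-GAN literature, not necessarily the paper's exact formulation): for a convex, lower-semicontinuous generator $f$ with $f(1) = 0$, the $f$-divergence between the expert occupancy measure $\rho_E$ and the learner occupancy measure $\rho_\pi$ admits the variational lower bound
$$D_f(\rho_E \,\|\, \rho_\pi) \;\geq\; \sup_{T} \; \mathbb{E}_{(s,a) \sim \rho_E}\big[T(s,a)\big] - \mathbb{E}_{(s,a) \sim \rho_\pi}\big[f^*(T(s,a))\big],$$
where $f^*$ is the convex conjugate of $f$ and $T$ plays the role of a discriminator. A GAIL-style method that additionally learns the divergence would then, roughly, solve
$$\min_{\pi} \; \max_{f \in \mathcal{F},\, T} \; \mathbb{E}_{\rho_E}\big[T(s,a)\big] - \mathbb{E}_{\rho_\pi}\big[f^*(T(s,a))\big],$$
with $\mathcal{F}$ a searchable family of valid $f$-divergence generators; the paper's concrete parameterization of the learnable $f$ may differ from this sketch.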

Authors (4)
  1. Xin Zhang (904 papers)
  2. Yanhua Li (19 papers)
  3. Ziming Zhang (59 papers)
  4. Zhi-Li Zhang (32 papers)
Citations (29)