Generative Adversarial Imitation Learning with Neural Networks: Global Optimality and Convergence Rate (2003.03709v2)

Published 8 Mar 2020 in cs.LG, math.OC, and stat.ML

Abstract: Generative adversarial imitation learning (GAIL) demonstrates tremendous success in practice, especially when combined with neural networks. Unlike reinforcement learning, GAIL learns both the policy and the reward function from expert (human) demonstrations. Despite its empirical success, it remains unclear whether GAIL with neural networks converges to the globally optimal solution. The major difficulty comes from the nonconvex-nonconcave minimax optimization structure. To bridge the gap between practice and theory, we analyze a gradient-based algorithm with alternating updates and establish its sublinear convergence to the globally optimal solution. To the best of our knowledge, our analysis establishes the global optimality and convergence rate of GAIL with neural networks for the first time.
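The alternating-update scheme the abstract refers to can be sketched on a toy problem. The snippet below runs alternating gradient descent-ascent on a simple quadratic minimax objective; this is only an illustration of the update structure (descend on the policy-side parameter, ascend on the reward-side parameter), not the paper's neural-network parameterization or its actual GAIL objective.

```python
import numpy as np

# Toy minimax objective min_theta max_w L(theta, w), standing in for
# GAIL's policy/reward optimization. This quadratic is an assumption
# for illustration; the paper analyzes a neural-network setting.
def loss(theta, w):
    return 0.5 * theta**2 + theta * w - 0.5 * w**2

def alternating_updates(theta=1.0, w=1.0, eta=0.1, steps=500):
    """Alternating gradient descent (theta) / ascent (w) updates."""
    for _ in range(steps):
        # Descent step on the minimizing variable (policy side).
        grad_theta = theta + w          # dL/dtheta
        theta -= eta * grad_theta
        # Ascent step on the maximizing variable (reward side),
        # using the freshly updated theta (alternating, not simultaneous).
        grad_w = theta - w              # dL/dw
        w += eta * grad_w
    return theta, w

theta, w = alternating_updates()
# For this toy objective the iterates converge to the saddle point (0, 0).
```

The alternating order matters: updating `w` against the already-updated `theta` damps the rotation that makes simultaneous gradient descent-ascent diverge on such objectives.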

Authors (4)
  1. Yufeng Zhang (67 papers)
  2. Qi Cai (40 papers)
  3. Zhuoran Yang (155 papers)
  4. Zhaoran Wang (164 papers)
Citations (11)