
Discriminator-Guided Model-Based Offline Imitation Learning (2207.00244v3)

Published 1 Jul 2022 in cs.LG and cs.AI

Abstract: Offline imitation learning (IL) is a powerful method for solving decision-making problems from expert demonstrations without reward labels. Existing offline IL methods suffer from severe performance degradation under limited expert data. Incorporating a learned dynamics model can potentially improve the state-action space coverage of expert data; however, it also introduces challenging issues such as model approximation/generalization errors and the suboptimality of rollout data. In this paper, we propose the Discriminator-guided Model-based offline Imitation Learning (DMIL) framework, which introduces a discriminator to simultaneously distinguish the dynamics correctness and suboptimality of model rollout data against real expert demonstrations. DMIL adopts a novel cooperative-yet-adversarial learning strategy, which uses the discriminator to guide and couple the learning process of the policy and dynamics model, resulting in improved model performance and robustness. Our framework can also be extended to the case where demonstrations contain a large proportion of suboptimal data. Experimental results show that DMIL and its extension achieve superior performance and robustness compared to state-of-the-art offline IL methods on small datasets.
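The discriminator at the core of DMIL is trained GAN-style to score expert transitions high and model-rollout transitions low; those scores can then guide policy and dynamics-model updates. The sketch below is a minimal, illustrative toy (not the paper's implementation): a linear discriminator over synthetic `(s, a, s')` feature vectors, trained with the standard logistic loss. All data, dimensions, and hyperparameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy transition features [s, a, s']: expert vs. model-rollout transitions
# (synthetic Gaussians standing in for real offline-IL data).
expert = rng.normal(loc=1.0, scale=0.5, size=(256, 6))
rollout = rng.normal(loc=0.0, scale=0.5, size=(256, 6))

# Linear discriminator trained with the GAN-style logistic loss:
# maximize  E[log D(expert)] + E[log(1 - D(rollout))]
w = np.zeros(6)
b = 0.0
lr = 0.5
for _ in range(200):
    p_e = sigmoid(expert @ w + b)   # driven toward 1
    p_r = sigmoid(rollout @ w + b)  # driven toward 0
    grad_w = expert.T @ (p_e - 1.0) / len(expert) + rollout.T @ p_r / len(rollout)
    grad_b = np.mean(p_e - 1.0) + np.mean(p_r)
    w -= lr * grad_w
    b -= lr * grad_b

# A trained discriminator scores rollout transitions: a high score marks a
# transition as expert-like, which can weight policy / dynamics-model updates.
expert_score = sigmoid(expert @ w + b).mean()
rollout_score = sigmoid(rollout @ w + b).mean()
```

In DMIL proper, the discriminator additionally conditions on whether a transition is dynamically consistent with the learned model, coupling the policy and model through a cooperative-yet-adversarial objective; this toy only illustrates the basic expert-vs-rollout scoring mechanism.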

Authors (8)
  1. Wenjia Zhang (23 papers)
  2. Haoran Xu (77 papers)
  3. Haoyi Niu (16 papers)
  4. Peng Cheng (229 papers)
  5. Ming Li (787 papers)
  6. Heming Zhang (13 papers)
  7. Guyue Zhou (68 papers)
  8. Xianyuan Zhan (47 papers)
Citations (12)