Combating False Negatives in Adversarial Imitation Learning (2002.00412v1)

Published 2 Feb 2020 in cs.LG, cs.AI, and stat.ML

Abstract: In adversarial imitation learning, a discriminator is trained to differentiate agent episodes from expert demonstrations representing the desired behavior. However, as the trained policy learns to be more successful, the negative examples (the ones produced by the agent) become increasingly similar to expert ones. Despite the fact that the task is successfully accomplished in some of the agent's trajectories, the discriminator is trained to output low values for them. We hypothesize that this inconsistent training signal for the discriminator can impede its learning, and consequently leads to worse overall performance of the agent. We show experimental evidence for this hypothesis and that the 'False Negatives' (i.e. successful agent episodes) significantly hinder adversarial imitation learning, which is the first contribution of this paper. Then, we propose a method to alleviate the impact of false negatives and test it on the BabyAI environment. This method consistently improves sample efficiency over the baselines by at least an order of magnitude.
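
The abstract describes the core mechanism: a discriminator labels expert demonstrations as positives and agent episodes as negatives, so agent episodes that actually accomplish the task become "false negatives". Below is a minimal sketch of that setup together with one plausible mitigation, relabeling successful agent episodes as positives during the discriminator update. The paper's exact method is not detailed on this page, and the names here (`discriminator_step`, `agent_success`, the toy feature sizes) are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: a GAIL-style discriminator update where agent transitions
# from successful episodes ("false negatives") are relabeled as positives.
# This is an assumed, simplified variant of the idea in the abstract.
import torch
import torch.nn as nn

# Toy discriminator over concatenated (state, action) features of size 8.
disc = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(disc.parameters(), lr=3e-4)
bce = nn.BCEWithLogitsLoss()

def discriminator_step(expert_batch, agent_batch, agent_success):
    """One discriminator update.

    expert_batch:  (B, 8) expert features, always labeled 1.
    agent_batch:   (B, 8) agent features.
    agent_success: (B,) bool, True if the transition comes from an agent
                   episode that accomplished the task (a potential false negative).
    """
    # Standard adversarial imitation learning would label every agent sample 0.
    # Here, samples from successful agent episodes are relabeled 1, so the
    # training signal stays consistent as the policy improves.
    agent_labels = agent_success.float().unsqueeze(-1)

    logits_e = disc(expert_batch)
    logits_a = disc(agent_batch)
    loss = bce(logits_e, torch.ones_like(logits_e)) + bce(logits_a, agent_labels)

    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage with random toy data (roughly 30% of agent samples marked successful):
expert = torch.randn(32, 8)
agent = torch.randn(32, 8)
success = torch.rand(32) > 0.7
print(discriminator_step(expert, agent, success))
```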

Authors (7)
  1. Konrad Zolna (24 papers)
  2. Chitwan Saharia (16 papers)
  3. Leonard Boussioux (12 papers)
  4. David Yu-Tung Hui (3 papers)
  5. Maxime Chevalier-Boisvert (13 papers)
  6. Dzmitry Bahdanau (46 papers)
  7. Yoshua Bengio (601 papers)
Citations (7)
