Imitating Unknown Policies via Exploration (2008.05660v1)
Abstract: Behavioral cloning is an imitation learning technique that teaches an agent how to behave through expert demonstrations. Recent approaches use self-supervision of fully-observable unlabeled snapshots of the states to decode state-pairs into actions. However, the iterative learning scheme used by these techniques is prone to getting stuck in bad local minima. We address these limitations by incorporating a two-phase model into the original framework, which learns from unlabeled observations via exploration, substantially improving traditional behavioral cloning by exploiting (i) a sampling mechanism to prevent bad local minima, (ii) a sampling mechanism to improve exploration, and (iii) self-attention modules to capture global features. The resulting technique outperforms the previous state-of-the-art in four different environments by a large margin.
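To make the abstract's key components concrete, below is a minimal, hypothetical sketch (not the authors' released code) of an inverse dynamics model that decodes a pair of consecutive state snapshots into the action connecting them, with a self-attention block over the convolutional feature map to capture global features. All layer sizes, names, and input shapes are illustrative assumptions.

```python
# Hypothetical sketch: inverse dynamics model with self-attention.
# Layer sizes, names, and input shapes are illustrative, not the paper's exact architecture.
import torch
import torch.nn as nn


class SelfAttention2d(nn.Module):
    """Self-attention over a convolutional feature map (SAGAN-style)."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                      # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)             # (b, hw, hw) attention weights
        v = self.value(x).flatten(2)                    # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x


class InverseDynamicsModel(nn.Module):
    """Predicts the action linking s_t to s_{t+1} from a stacked state-pair."""

    def __init__(self, num_actions: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=8, stride=4), nn.ReLU(),  # 6 channels = pair of RGB frames
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            SelfAttention2d(64),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(512), nn.ReLU(), nn.Linear(512, num_actions)
        )

    def forward(self, state_pair: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(state_pair))


if __name__ == "__main__":
    model = InverseDynamicsModel(num_actions=4)
    pair = torch.randn(2, 6, 84, 84)   # batch of (s_t, s_{t+1}) stacked on channels
    print(model(pair).shape)            # torch.Size([2, 4])
```

In the two-phase scheme the abstract describes, a model along these lines would label unlabeled state-pairs collected via exploration, and those inferred actions would then supervise a behavioral-cloning policy; the sampling mechanisms govern which self-generated labels are trusted in each iteration.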
- Nathan Gavenski (7 papers)
- Juarez Monteiro (6 papers)
- Roger Granada (11 papers)
- Felipe Meneguzzi (28 papers)
- Rodrigo C. Barros (12 papers)