Offline Reinforcement Learning as Anti-Exploration (2106.06431v1)

Published 11 Jun 2021 in cs.LG

Abstract: Offline Reinforcement Learning (RL) aims at learning an optimal control from a fixed dataset, without interactions with the system. An agent in this setting should avoid selecting actions whose consequences cannot be predicted from the data. This is the converse of exploration in RL, which favors such actions. We thus take inspiration from the literature on bonus-based exploration to design a new offline RL agent. The core idea is to subtract a prediction-based exploration bonus from the reward, instead of adding it for exploration. This allows the policy to stay close to the support of the dataset. We connect this approach to a more common regularization of the learned policy towards the data. Instantiated with a bonus based on the prediction error of a variational autoencoder, we show that our agent is competitive with the state of the art on a set of continuous control locomotion and manipulation tasks.
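As a rough illustration of the bonus-subtraction idea described in the abstract, the sketch below trains a variational autoencoder on (state, action) pairs from the offline dataset and subtracts its reconstruction error from the bootstrapped value target. The network sizes, coefficients, and the `policy` / `target_q` callables are illustrative assumptions, not the paper's exact architecture or hyperparameters.

```python
# Hedged sketch of the anti-exploration idea: penalize actions whose
# consequences the data cannot predict by subtracting a VAE prediction-error
# bonus from the reward/value, instead of adding it as in exploration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StateActionVAE(nn.Module):
    """Small VAE over concatenated (state, action) vectors (assumed design)."""
    def __init__(self, state_dim, action_dim, latent_dim=16, hidden=256):
        super().__init__()
        in_dim = state_dim + action_dim
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.log_std = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, in_dim)
        )

    def forward(self, state, action):
        x = torch.cat([state, action], dim=-1)
        h = self.encoder(x)
        mu, log_std = self.mu(h), self.log_std(h).clamp(-4, 4)
        z = mu + log_std.exp() * torch.randn_like(mu)  # reparameterization trick
        recon = self.decoder(z)
        return recon, mu, log_std, x

def vae_loss(vae, state, action, beta=0.5):
    # Standard ELBO: reconstruction error plus KL to a unit Gaussian prior.
    recon, mu, log_std, x = vae(state, action)
    recon_loss = F.mse_loss(recon, x)
    kl = -0.5 * (1 + 2 * log_std - mu.pow(2) - (2 * log_std).exp()).mean()
    return recon_loss + beta * kl

@torch.no_grad()
def anti_exploration_bonus(vae, state, action):
    # Prediction (reconstruction) error: large for out-of-distribution actions.
    recon, _, _, x = vae(state, action)
    return ((recon - x) ** 2).mean(dim=-1, keepdim=True)

@torch.no_grad()
def penalized_td_target(vae, target_q, policy, reward, next_state, done,
                        gamma=0.99, alpha=1.0):
    # Core idea: SUBTRACT the bonus (anti-exploration), which keeps the
    # bootstrapped value close to the support of the dataset.
    # `target_q(next_state, next_action)` and `policy(next_state)` are
    # hypothetical callables standing in for an actor-critic learner.
    next_action = policy(next_state)
    bonus = anti_exploration_bonus(vae, next_state, next_action)
    next_q = target_q(next_state, next_action) - alpha * bonus
    return reward + gamma * (1.0 - done) * next_q
```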

Authors (7)
  1. Shideh Rezaeifar (11 papers)
  2. Robert Dadashi (25 papers)
  3. Nino Vieillard (22 papers)
  4. Léonard Hussenot (25 papers)
  5. Olivier Bachem (52 papers)
  6. Olivier Pietquin (90 papers)
  7. Matthieu Geist (93 papers)
Citations (47)
