Offline-to-Online Reinforcement Learning via Balanced Replay and Pessimistic Q-Ensemble (2107.00591v2)

Published 1 Jul 2021 in cs.RO and cs.LG

Abstract: Recent advances in deep offline reinforcement learning (RL) have made it possible to train strong robotic agents from offline datasets. However, depending on the quality of the trained agents and the application being considered, it is often desirable to fine-tune such agents via further online interactions. In this paper, we observe that state-action distribution shift may lead to severe bootstrap error during fine-tuning, which destroys the good initial policy obtained via offline RL. To address this issue, we first propose a balanced replay scheme that prioritizes samples encountered online while also encouraging the use of near-on-policy samples from the offline dataset. Furthermore, we leverage multiple Q-functions trained pessimistically offline, thereby preventing overoptimism concerning unfamiliar actions at novel states during the initial training phase. We show that the proposed method improves the sample efficiency and final performance of the fine-tuned robotic agents on various locomotion and manipulation tasks. Our code is available at: https://github.com/shlee94/Off2OnRL.
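
The abstract names two fine-tuning components: a balanced replay buffer that mixes newly collected online transitions with near-on-policy offline transitions, and a pessimistic Q-ensemble whose minimum value is used for bootstrapping. The Python sketch below is only an illustration of how those two pieces could slot into a generic actor-critic fine-tuning loop; it is not the authors' implementation (see the linked repository for that), and the function names, the density-ratio-style priority for offline samples, and the 50/50 online fraction are assumptions made for the example.

```python
# Minimal, illustrative sketch (not the official Off2OnRL code).
#   (1) pessimistic_target: the Bellman target bootstraps from the minimum
#       over an ensemble of target Q-networks (pessimism over the ensemble).
#   (2) balanced_sample: a batch mixes online transitions with offline ones,
#       where offline transitions are weighted by a hypothetical priority that
#       is higher for samples closer to the current online distribution.
import random
import torch


def pessimistic_target(q_targets, next_obs, next_act, reward, done, gamma=0.99):
    """Bellman target using the minimum over an ensemble of target Q-networks."""
    q_vals = torch.stack([q(next_obs, next_act) for q in q_targets], dim=0)
    min_q = q_vals.min(dim=0).values  # pessimistic value estimate
    return reward + gamma * (1.0 - done) * min_q


def balanced_sample(online_buffer, offline_buffer, offline_priorities,
                    batch_size, online_fraction=0.5):
    """Draw a batch that mixes online data with near-on-policy offline data.

    `offline_priorities[i]` is an assumed non-negative weight for offline
    transition i (e.g. an estimated ratio of online to offline state-action
    density); larger values mean "closer to on-policy".
    """
    n_online = min(int(batch_size * online_fraction), len(online_buffer))
    batch = random.sample(online_buffer, n_online)
    batch += random.choices(offline_buffer, weights=offline_priorities,
                            k=batch_size - n_online)
    return batch
```

The sketch only shows where the ensemble values and replay priorities would be consumed during fine-tuning; how the Q-functions are pessimistically pretrained offline and how the priorities are estimated are described in the paper itself.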

Authors (5)
  1. Seunghyun Lee (60 papers)
  2. Younggyo Seo (25 papers)
  3. Kimin Lee (69 papers)
  4. Pieter Abbeel (372 papers)
  5. Jinwoo Shin (196 papers)
Citations (149)
