
Train Once, Get a Family: State-Adaptive Balances for Offline-to-Online Reinforcement Learning (2310.17966v2)

Published 27 Oct 2023 in cs.LG and cs.AI

Abstract: Offline-to-online reinforcement learning (RL) is a training paradigm that combines pre-training on a pre-collected dataset with fine-tuning in an online environment. However, the incorporation of online fine-tuning can intensify the well-known distributional shift problem. Existing solutions tackle this problem by imposing a policy constraint on the policy improvement objective in both offline and online learning. They typically advocate a single balance between policy improvement and constraints across diverse data collections. This one-size-fits-all manner may not optimally leverage each collected sample due to the significant variation in data quality across different states. To this end, we introduce Family Offline-to-Online RL (FamO2O), a simple yet effective framework that empowers existing algorithms to determine state-adaptive improvement-constraint balances. FamO2O utilizes a universal model to train a family of policies with different improvement/constraint intensities, and a balance model to select a suitable policy for each state. Theoretically, we prove that state-adaptive balances are necessary for achieving a higher policy performance upper bound. Empirically, extensive experiments show that FamO2O offers a statistically significant improvement over various existing methods, achieving state-of-the-art performance on the D4RL benchmark. Code is available at https://github.com/LeapLabTHU/FamO2O.
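The abstract's two-component design can be illustrated with a toy sketch. This is a hedged illustration only, not the authors' implementation: the function names (`universal_policy`, `balance_model`) and the specific action/balance computations are hypothetical stand-ins for the universal model (a policy family conditioned on an improvement/constraint coefficient) and the balance model (which picks a per-state coefficient rather than one global value).

```python
import numpy as np

# Toy sketch of FamO2O's two components (illustrative names, not the
# authors' actual code):
#  - universal_policy: a policy family conditioned on a balance
#    coefficient beta, trading off policy improvement vs. the constraint
#  - balance_model: emits a state-adaptive beta instead of a single
#    global coefficient shared across all states

rng = np.random.default_rng(0)

def universal_policy(state, beta):
    """Blend a 'greedy' action (improvement term stand-in) with a
    'behavior-like' action (constraint term stand-in), weighted by beta."""
    greedy_action = np.tanh(state)      # stand-in for unconstrained improvement
    behavior_action = 0.5 * state       # stand-in for staying near the data
    return beta * greedy_action + (1.0 - beta) * behavior_action

def balance_model(state):
    """Map a state to a per-state balance coefficient in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-state.mean()))   # sigmoid of a state feature

state = rng.normal(size=4)
beta = balance_model(state)            # state-adaptive, not one-size-fits-all
action = universal_policy(state, beta)
print(round(float(beta), 3), action.shape)
```

The point of the sketch is the control flow: the balance model is queried per state, so high-quality regions of the dataset can receive a more constrained policy while low-quality regions allow more aggressive improvement.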

Authors (9)
  1. Shenzhi Wang (12 papers)
  2. Qisen Yang (13 papers)
  3. Jiawei Gao (10 papers)
  4. Matthieu Gaetan Lin (4 papers)
  5. Hao Chen (1006 papers)
  6. Liwei Wu (34 papers)
  7. Ning Jia (22 papers)
  8. Shiji Song (103 papers)
  9. Gao Huang (178 papers)
Citations (8)
