
Bayesian Design Principles for Offline-to-Online Reinforcement Learning (2405.20984v1)

Published 31 May 2024 in cs.LG

Abstract: Offline reinforcement learning (RL) is crucial for real-world applications where exploration can be costly or unsafe. However, offline learned policies are often suboptimal, and further online fine-tuning is required. In this paper, we tackle the fundamental dilemma of offline-to-online fine-tuning: if the agent remains pessimistic, it may fail to learn a better policy, while if it becomes optimistic directly, performance may suffer from a sudden drop. We show that Bayesian design principles are crucial in solving such a dilemma. Instead of adopting optimistic or pessimistic policies, the agent should act in a way that matches its belief in optimal policies. Such a probability-matching agent can avoid a sudden performance drop while still being guaranteed to find the optimal policy. Based on our theoretical findings, we introduce a novel algorithm that outperforms existing methods on various benchmarks, demonstrating the efficacy of our approach. Overall, the proposed approach provides a new perspective on offline-to-online RL that has the potential to enable more effective learning from offline data.
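To make the abstract's "probability matching" idea concrete, here is a minimal sketch in the spirit of posterior (Thompson) sampling, which is the standard way to act in proportion to one's belief that each policy is optimal. This is not the paper's algorithm; the ensemble-as-posterior setup and all names (q_ensemble, probability_matching_action, etc.) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: an ensemble of Q-tables stands in for a posterior over
# value functions (e.g., fit to offline data with bootstrapped targets).
n_states, n_actions, ensemble_size = 5, 3, 10
q_ensemble = rng.normal(size=(ensemble_size, n_states, n_actions))

def probability_matching_action(state: int) -> int:
    """Sample one posterior member and act greedily under it.

    Acting greedily w.r.t. a single posterior sample picks each action with
    probability roughly equal to the posterior probability that it is
    optimal -- the probability-matching behavior the abstract describes.
    """
    k = rng.integers(ensemble_size)  # draw one belief sample
    return int(np.argmax(q_ensemble[k, state]))

# The two extremes of the offline-to-online dilemma, for contrast:
def optimistic_action(state: int) -> int:
    # Greedy w.r.t. the most favorable ensemble member per action.
    return int(np.argmax(q_ensemble[:, state].max(axis=0)))

def pessimistic_action(state: int) -> int:
    # Greedy w.r.t. the least favorable ensemble member per action.
    return int(np.argmax(q_ensemble[:, state].min(axis=0)))

# The sampled policy varies across calls, reflecting residual uncertainty.
print([probability_matching_action(0) for _ in range(5)])
```

Under this reading, the pessimistic policy can get stuck on the offline-optimal action, the optimistic policy can over-explore and cause the sudden performance drop the abstract mentions, while the probability-matching policy explores only in proportion to its remaining uncertainty.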

Authors (10)
  1. Hao Hu (114 papers)
  2. Yiqin Yang (14 papers)
  3. Jianing Ye (7 papers)
  4. Chengjie Wu (8 papers)
  5. Ziqing Mai (2 papers)
  6. Yujing Hu (28 papers)
  7. Tangjie Lv (35 papers)
  8. Changjie Fan (79 papers)
  9. Qianchuan Zhao (28 papers)
  10. Chongjie Zhang (68 papers)
Citations (1)
