MICRO: Model-Based Offline Reinforcement Learning with a Conservative Bellman Operator (2312.03991v2)

Published 7 Dec 2023 in cs.LG and cs.AI

Abstract: Offline reinforcement learning (RL) faces the significant challenge of distribution shift. Model-free offline RL penalizes the Q value for out-of-distribution (OOD) data or constrains the policy to stay close to the behavior policy to tackle this problem, but this inhibits exploration of the OOD region. Model-based offline RL, which uses a trained environment model to generate more OOD data and performs conservative policy optimization within that model, has become an effective approach to this problem. However, current model-based algorithms rarely consider agent robustness when incorporating conservatism into the policy. Therefore, a new model-based offline algorithm with a conservative Bellman operator (MICRO) is proposed. This method trades off performance and robustness by introducing the robust Bellman operator into the algorithm. Compared with previous model-based algorithms that rely on robust adversarial models, MICRO significantly reduces computation cost by choosing only the minimal Q value in the state uncertainty set. Extensive experiments demonstrate that MICRO outperforms prior RL algorithms on offline RL benchmarks and is considerably robust to adversarial perturbations.
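
The following is a minimal sketch of the conservative Bellman target described in the abstract: take the minimal Q value over a state uncertainty set around the model-predicted next state. The function and argument names (`q_net`, `policy`, `radius`, `n_samples`) and the way the uncertainty set is built (uniform perturbations of the next state) are assumptions for illustration, not the paper's actual implementation.

```python
import torch

def conservative_bellman_target(q_net, policy, reward, next_state,
                                gamma=0.99, radius=0.01, n_samples=10):
    """Hypothetical sketch of a conservative Bellman backup:
    keep only the minimal Q value over a sampled state uncertainty set."""
    # Build a crude uncertainty set by perturbing the model-predicted next state
    # within a small box of half-width `radius` (an assumption for illustration).
    noise = (torch.rand(n_samples, *next_state.shape) * 2 - 1) * radius
    perturbed_states = next_state.unsqueeze(0) + noise            # [n_samples, state_dim]

    # Query the policy and Q network on each perturbed state.
    actions = policy(perturbed_states)                            # [n_samples, action_dim]
    q_values = q_net(perturbed_states, actions).squeeze(-1)       # [n_samples]

    # Conservative choice: the minimal Q value in the uncertainty set.
    min_q = q_values.min()
    return reward + gamma * min_q
```

Compared with training an adversarial perturbation model, taking a minimum over a sampled set like this only costs a few extra Q evaluations per backup, which is the computational saving the abstract points to.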

Authors (8)
  1. Xiao-Yin Liu (5 papers)
  2. Xiao-Hu Zhou (18 papers)
  3. Hao Li (803 papers)
  4. Mei-Jiang Gui (10 papers)
  5. Tian-Yu Xiang (9 papers)
  6. De-Xing Huang (7 papers)
  7. Zeng-Guang Hou (25 papers)
  8. Guotao Li (3 papers)
Citations (1)
