
Context Shift Reduction for Offline Meta-Reinforcement Learning (2311.03695v1)

Published 7 Nov 2023 in cs.LG and cs.AI

Abstract: Offline meta-reinforcement learning (OMRL) utilizes pre-collected offline datasets to enhance the agent's generalization ability on unseen tasks. However, the context shift problem arises due to the distribution discrepancy between the contexts used for training (from the behavior policy) and testing (from the exploration policy). The context shift problem leads to incorrect task inference and further deteriorates the generalization ability of the meta-policy. Existing OMRL methods either overlook this problem or attempt to mitigate it with additional information. In this paper, we propose a novel approach called Context Shift Reduction for OMRL (CSRO) to address the context shift problem with only offline datasets. The key insight of CSRO is to minimize the influence of the policy on the context during both the meta-training and meta-test phases. During meta-training, we design a max-min mutual information representation learning mechanism to diminish the impact of the behavior policy on the task representation. In the meta-test phase, we introduce a non-prior context collection strategy to reduce the effect of the exploration policy. Experimental results demonstrate that CSRO significantly reduces the context shift and improves the generalization ability, surpassing previous methods across various challenging domains.
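The max-min mutual information idea in the abstract can be illustrated with a small adversarial sketch: a context encoder pools offline transitions into a task embedding, and an auxiliary action predictor acts as a proxy for the mutual information between that embedding and the behavior policy's actions; the encoder is trained to keep task-relevant information while making the actions hard to predict. The code below is a hypothetical illustration under assumptions, not the authors' implementation: PyTorch, the network sizes, the MSE-based MI proxy, the `lam` weight, and the placeholder task-inference loss are all assumptions, and the paper's actual objective uses an estimated mutual-information bound rather than a simple prediction error. The test-time non-prior context collection strategy is not shown.

```python
# Hypothetical sketch of a max-min mutual-information objective for a context
# encoder (not the authors' code); all dimensions and losses are assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, REWARD_DIM, Z_DIM = 8, 2, 1, 5
CTX_DIM = STATE_DIM + ACTION_DIM + REWARD_DIM  # one (state, action, reward) context tuple

class ContextEncoder(nn.Module):
    """Pools a set of context transitions into a task embedding z."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(CTX_DIM, 64), nn.ReLU(), nn.Linear(64, Z_DIM))

    def forward(self, context):                 # context: (batch, n_transitions, CTX_DIM)
        return self.net(context).mean(dim=1)    # permutation-invariant mean pooling

class ActionPredictor(nn.Module):
    """Adversary: tries to recover the behavior-policy action from (state, z)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + Z_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACTION_DIM))

    def forward(self, state, z):
        return self.net(torch.cat([state, z], dim=-1))

encoder, adversary = ContextEncoder(), ActionPredictor()
enc_opt = torch.optim.Adam(encoder.parameters(), lr=3e-4)
adv_opt = torch.optim.Adam(adversary.parameters(), lr=3e-4)

def max_min_step(context, states, actions, task_loss_fn, lam=0.1):
    """One max-min update: the adversary maximizes how well actions can be read
    off z (the 'max' step), then the encoder minimizes that predictability while
    keeping its task-inference loss (the 'min' step)."""
    # Max step: fit the adversary on frozen embeddings.
    z = encoder(context).detach()
    adv_loss = ((adversary(states, z) - actions) ** 2).mean()
    adv_opt.zero_grad(); adv_loss.backward(); adv_opt.step()

    # Min step: encoder keeps task information but makes actions hard to predict.
    z = encoder(context)
    pred_err = ((adversary(states, z) - actions) ** 2).mean()
    enc_loss = task_loss_fn(z) - lam * pred_err
    enc_opt.zero_grad(); enc_loss.backward(); enc_opt.step()
    return enc_loss.item(), adv_loss.item()

# Toy usage with random data and a placeholder task-inference loss.
context = torch.randn(4, 16, CTX_DIM)
states, actions = torch.randn(4, STATE_DIM), torch.randn(4, ACTION_DIM)
print(max_min_step(context, states, actions, task_loss_fn=lambda z: (z ** 2).mean()))
```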

Authors (13)
  1. Yunkai Gao (5 papers)
  2. Rui Zhang (1138 papers)
  3. Jiaming Guo (37 papers)
  4. Fan Wu (264 papers)
  5. Qi Yi (18 papers)
  6. Shaohui Peng (20 papers)
  7. Siming Lan (3 papers)
  8. Ruizhi Chen (22 papers)
  9. Zidong Du (41 papers)
  10. Xing Hu (122 papers)
  11. Qi Guo (237 papers)
  12. Ling Li (112 papers)
  13. Yunji Chen (51 papers)
Citations (10)
