When Data Geometry Meets Deep Function: Generalizing Offline Reinforcement Learning (2205.11027v3)

Published 23 May 2022 in cs.LG, cs.AI, and cs.RO

Abstract: In offline reinforcement learning (RL), one detrimental issue for policy learning is the error accumulation of the deep Q function in out-of-distribution (OOD) areas. Unfortunately, existing offline RL methods are often over-conservative, inevitably hurting generalization performance outside the data distribution. In our study, one interesting observation is that deep Q functions approximate well inside the convex hull of the training data. Inspired by this, we propose a new method, DOGE (Distance-sensitive Offline RL with better GEneralization). DOGE marries dataset geometry with deep function approximators in offline RL, and enables exploitation in generalizable OOD areas rather than strictly constraining the policy within the data distribution. Specifically, DOGE trains a state-conditioned distance function that can be readily plugged into standard actor-critic methods as a policy constraint. Simple yet elegant, our algorithm enjoys better generalization compared to state-of-the-art methods on D4RL benchmarks. Theoretical analysis demonstrates the superiority of our approach over existing methods that are based solely on data distribution or support constraints.
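The abstract describes the mechanism (a learned state-conditioned distance function used as a policy constraint in an actor-critic loop) but not the implementation. Below is a minimal PyTorch-style sketch of that idea, assuming a continuous-control setting with actions in [-1, 1]. All names here (`DistanceNet`, `distance_loss`, `actor_loss`), the network sizes, and the fixed multiplier `lam` are illustrative assumptions, not the paper's code: DOGE's actual objective targets the distance to the *nearest* dataset action and adapts the Lagrange multiplier, which this sketch simplifies to a plain regression target and a fixed penalty weight.

```python
import torch
import torch.nn as nn

# Assumed problem sizes, for illustration only.
state_dim, action_dim = 17, 6

class DistanceNet(nn.Module):
    """State-conditioned distance function g(s, a) -> scalar >= 0."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Softplus(),  # keep the output non-negative
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1)).squeeze(-1)

def distance_loss(g, s, a_data):
    # Sample random actions and regress g(s, a_rand) toward the Euclidean
    # distance to the dataset action at the same state. (A stand-in for
    # DOGE's objective, which estimates distance to the nearest dataset action.)
    a_rand = torch.rand_like(a_data) * 2 - 1        # uniform in [-1, 1]
    target = (a_rand - a_data).norm(dim=-1)
    return ((g(s, a_rand) - target) ** 2).mean()

def actor_loss(g, critic, actor, s, eps=0.1, lam=1.0):
    # Penalized policy objective: maximize Q while keeping the learned
    # distance g(s, pi(s)) below a threshold eps. DOGE tunes the
    # multiplier via dual ascent; lam is fixed here for brevity.
    a_pi = actor(s)
    q = critic(torch.cat([s, a_pi], dim=-1)).squeeze(-1)
    constraint = torch.relu(g(s, a_pi) - eps)       # penalize only violations
    return (-q + lam * constraint).mean()

# Smoke test on random tensors (batch of 32):
s = torch.randn(32, state_dim)
a = torch.rand(32, action_dim) * 2 - 1
g = DistanceNet()
print(distance_loss(g, s, a).item())
```

The key design point is that the constraint is geometric rather than distributional: the policy is penalized by its learned distance to the data rather than forced to match the behavior policy's action distribution, which is what allows exploitation in OOD regions the Q function still approximates well.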

Authors (6)
  1. Jianxiong Li (31 papers)
  2. Xianyuan Zhan (47 papers)
  3. Haoran Xu (77 papers)
  4. Xiangyu Zhu (85 papers)
  5. Jingjing Liu (139 papers)
  6. Ya-Qin Zhang (45 papers)
Citations (19)
