
NeoRL: A Near Real-World Benchmark for Offline Reinforcement Learning (2102.00714v2)

Published 1 Feb 2021 in cs.LG and cs.AI

Abstract: Offline reinforcement learning (RL) aims at learning a good policy from a batch of collected data, without extra interactions with the environment during training. However, current offline RL benchmarks commonly have a large reality gap, because they involve large datasets collected by highly exploratory policies, and the trained policy is directly evaluated in the environment. In real-world situations, running a highly exploratory policy is prohibited to ensure system safety, the data is commonly very limited, and a trained policy should be well validated before deployment. In this paper, we present a near real-world offline RL benchmark, named NeoRL, which contains datasets from various domains with controlled sizes, and extra test datasets for policy validation. We evaluate existing offline RL algorithms on NeoRL and argue that the performance of a policy should also be compared with the deterministic version of the behavior policy, instead of the dataset reward. The empirical results demonstrate that the tested offline RL algorithms become less competitive with the deterministic policy on many datasets, and that offline policy evaluation hardly helps. The NeoRL suite can be found at http://polixir.ai/research/neorl. We hope this work will shed some light on future research and draw more attention to the challenges of deploying RL in real-world systems.
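The abstract's central evaluation point, that a trained offline RL policy should be judged against the deterministic version of the behavior policy rather than against the raw dataset reward, can be illustrated with a short sketch. This is not code from the paper or the NeoRL package: `env`, `trained_policy`, `behavior_policy`, and the `policy.act(...)` interface are hypothetical placeholders, and the Gym-style rollout loop is only an assumption about how online validation would be run.

```python
import numpy as np

def rollout_return(env, policy, episodes=10, deterministic=True):
    """Average undiscounted return of `policy` over a few evaluation rollouts.

    `env` is assumed to follow the classic Gym reset/step interface, and
    `policy.act(obs, deterministic=...)` is a hypothetical agent API; neither
    is defined by the NeoRL paper itself.
    """
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            action = policy.act(obs, deterministic=deterministic)
            obs, reward, done, info = env.step(action)
            total += reward
        returns.append(total)
    return float(np.mean(returns))

# The paper's suggested baseline: report the trained policy's return next to
# the return of the *deterministic* behavior policy, rather than the average
# reward recorded in the offline dataset.
# score_trained  = rollout_return(env, trained_policy)
# score_behavior = rollout_return(env, behavior_policy, deterministic=True)
```

As the abstract argues, the deterministic behavior policy is a stricter baseline than the dataset reward, since the recorded returns include exploration noise that a deployed, deterministic version of the same policy would not incur.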

Authors (8)
  1. Rongjun Qin (47 papers)
  2. Songyi Gao (2 papers)
  3. Xingyuan Zhang (6 papers)
  4. Zhen Xu (76 papers)
  5. Shengkai Huang (1 paper)
  6. Zewen Li (6 papers)
  7. Weinan Zhang (322 papers)
  8. Yang Yu (385 papers)
Citations (70)
