Boosting Offline Reinforcement Learning with Residual Generative Modeling (2106.10411v2)

Published 19 Jun 2021 in cs.LG and cs.AI

Abstract: Offline reinforcement learning (RL) aims to learn a near-optimal policy from recorded offline experience without online exploration. Current offline RL research includes: 1) generative modeling, i.e., approximating a policy using fixed data; and 2) learning the state-action value function. While most research focuses on the state-action value function, reducing the bootstrapping error in value-function approximation induced by the distribution shift of the training data, the effects of error propagation in generative modeling have been neglected. In this paper, we analyze the error in generative modeling. We propose AQL (action-conditioned Q-learning), a residual generative model to reduce policy approximation error for offline RL. We show that our method can learn more accurate policy approximations on different benchmark datasets. In addition, we show that the proposed offline RL method can learn more competitive AI agents in complex control tasks in the multiplayer online battle arena (MOBA) game Honor of Kings.
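To make the abstract's idea of "residual generative modeling" concrete, below is a minimal, hypothetical PyTorch sketch: a base generative policy fit to the logged actions, plus a small residual network that corrects the base action, with the corrected action also scored by a learned Q-function. The network shapes, the additive residual parameterization, and the loss weighting are assumptions for illustration only, not the paper's actual AQL architecture.

```python
# Illustrative sketch (not the paper's implementation): a behavior-cloning base
# policy plus a residual correction, trained against offline data and a critic.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, HIDDEN = 17, 6, 256  # hypothetical dimensions

class BasePolicy(nn.Module):
    """Generative model of the behavior policy (plain behavior cloning here)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, ACTION_DIM), nn.Tanh(),
        )
    def forward(self, s):
        return self.net(s)

class ResidualPolicy(nn.Module):
    """Small correction network conditioned on the state and the base action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, ACTION_DIM),
        )
    def forward(self, s, a_base):
        return self.net(torch.cat([s, a_base], dim=-1))

class QNetwork(nn.Module):
    """State-action value function Q(s, a)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, 1),
        )
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

base, residual, q = BasePolicy(), ResidualPolicy(), QNetwork()
opt = torch.optim.Adam(
    list(base.parameters()) + list(residual.parameters()), lr=3e-4)

def policy_loss(s, a_logged, alpha=0.1):
    """Fit the logged actions with base + residual output, while nudging the
    corrected action toward higher Q-values (alpha is an illustrative weight)."""
    a_base = base(s)
    a_hat = a_base + residual(s, a_base.detach())   # residual correction
    bc_loss = ((a_hat - a_logged) ** 2).mean()      # reduce policy approximation error
    q_term = -q(s, a_hat).mean()                    # exploit the learned critic
    return bc_loss + alpha * q_term

# One illustrative update on a random minibatch standing in for offline data.
s = torch.randn(32, STATE_DIM)
a_logged = torch.rand(32, ACTION_DIM) * 2 - 1
loss = policy_loss(s, a_logged)
opt.zero_grad(); loss.backward(); opt.step()
```

The intent of the sketch is only to show where a residual term could enter the generative (policy-approximation) side of offline RL; the critic's own training and any action-conditioning details specific to AQL are omitted.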

Authors (8)
  1. Hua Wei (71 papers)
  2. Deheng Ye (50 papers)
  3. Zhao Liu (97 papers)
  4. Hao Wu (623 papers)
  5. Bo Yuan (151 papers)
  6. Qiang Fu (159 papers)
  7. Wei Yang (349 papers)
  8. Zhenhui Li (34 papers)
Citations (9)
