
TD3 with Reverse KL Regularizer for Offline Reinforcement Learning from Mixed Datasets (2212.02125v1)

Published 5 Dec 2022 in stat.ML, cs.AI, and cs.LG

Abstract: We consider an offline reinforcement learning (RL) setting where the agent needs to learn from a dataset collected by rolling out multiple behavior policies. This setting poses two challenges: 1) The optimal trade-off between optimizing the RL signal and the behavior cloning (BC) signal varies across states, because different behavior policies induce different action coverage. Previous methods fail to handle this, as they control only a global trade-off. 2) For a given state, the action distribution generated by different behavior policies may have multiple modes. The BC regularizers in many previous methods are mean-seeking, resulting in policies that select out-of-distribution (OOD) actions between the modes. In this paper, we address both challenges by using an adaptively weighted reverse Kullback-Leibler (KL) divergence as the BC regularizer, built on the TD3 algorithm. Our method not only trades off the RL and BC signals with per-state weights (i.e., strong BC regularization on states with narrow action coverage, and vice versa) but also avoids selecting OOD actions thanks to the mode-seeking property of reverse KL. Empirically, our algorithm outperforms existing offline RL algorithms on the MuJoCo locomotion tasks with the standard D4RL datasets as well as mixed datasets that combine the standard datasets.
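The mode-seeking vs. mean-seeking distinction in the abstract can be made concrete with a small numerical sketch (this is an illustration of the general KL property, not the paper's implementation): fit a unimodal Gaussian "policy" to a bimodal "behavior" action distribution by minimizing either reverse KL, KL(pi || behavior), or forward KL, KL(behavior || pi). The grid, mode locations, and fixed standard deviation below are all arbitrary choices for the demo.

```python
import numpy as np

# 1-D action grid and a bimodal "behavior policy" with modes at -2 and +2.
xs = np.linspace(-6.0, 6.0, 2001)
dx = xs[1] - xs[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

behavior = 0.5 * gauss(xs, -2.0, 0.5) + 0.5 * gauss(xs, 2.0, 0.5)

def kl(p, q):
    # Grid approximation of KL(p || q); small epsilon avoids log(0).
    eps = 1e-12
    return np.sum(p * (np.log(p + eps) - np.log(q + eps))) * dx

def fit(mode_seeking):
    # Grid-search the mean of a unimodal Gaussian policy pi = N(mu, 0.5).
    best_mu, best_val = None, np.inf
    for mu in np.linspace(-4.0, 4.0, 161):
        pi = gauss(xs, mu, 0.5)
        # Reverse KL, KL(pi || behavior): mode-seeking.
        # Forward KL, KL(behavior || pi): mean-seeking.
        val = kl(pi, behavior) if mode_seeking else kl(behavior, pi)
        if val < best_val:
            best_mu, best_val = mu, val
    return best_mu

mu_rev = fit(mode_seeking=True)   # lands on one of the modes (near +/-2)
mu_fwd = fit(mode_seeking=False)  # averages the modes (near 0)
print(f"reverse-KL mean: {mu_rev:.2f}, forward-KL mean: {mu_fwd:.2f}")
```

The forward-KL fit centers between the modes (an OOD action under the behavior distribution), while the reverse-KL fit commits to a single mode, which is the property the paper's BC regularizer exploits.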

Authors (9)
  1. Yuanying Cai (4 papers)
  2. Chuheng Zhang (24 papers)
  3. Li Zhao (150 papers)
  4. Wei Shen (181 papers)
  5. Xuyun Zhang (21 papers)
  6. Lei Song (60 papers)
  7. Jiang Bian (229 papers)
  8. Tao Qin (201 papers)
  9. Tieyan Liu (4 papers)
Citations (2)
