On the Estimation Bias in Double Q-Learning (2109.14419v3)

Published 29 Sep 2021 in cs.LG, cs.AI, and stat.ML

Abstract: Double Q-learning is a classical method for reducing overestimation bias, which is caused by taking maximum estimated values in the Bellman operation. Its variants in the deep Q-learning paradigm have shown great promise in producing reliable value prediction and improving learning performance. However, as shown by prior work, double Q-learning is not fully unbiased and suffers from underestimation bias. In this paper, we show that such underestimation bias may lead to multiple non-optimal fixed points under an approximate Bellman operator. To address the concern of converging to non-optimal stationary solutions, we propose a simple but effective approach as a partial fix for the underestimation bias in double Q-learning. This approach leverages approximate dynamic programming to bound the target value. We extensively evaluate our proposed method on the Atari benchmark tasks and demonstrate its significant improvement over baseline algorithms.
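For orientation, here is a minimal sketch of the standard double Q-learning target, plus a hypothetical lower-bound clip of the kind the abstract alludes to. The function names and the bound `v_lower` are illustrative assumptions; the abstract does not specify how the paper constructs its bound, so this is not the authors' exact algorithm.

```python
import numpy as np

def double_q_target(q_a, q_b, reward, next_state, gamma=0.99):
    """Standard double Q-learning target for updating q_a.

    q_a, q_b: arrays of shape [n_states, n_actions].
    Selecting the action with q_a but evaluating it with q_b
    decouples max-selection from value estimation, which reduces
    overestimation but can introduce underestimation.
    """
    a_star = np.argmax(q_a[next_state])              # select with q_a
    return reward + gamma * q_b[next_state, a_star]  # evaluate with q_b

def bounded_double_q_target(q_a, q_b, reward, next_state, v_lower, gamma=0.99):
    """Hypothetical variant: clip the bootstrapped value from below
    with an external estimate v_lower[next_state] (e.g., from an
    approximate dynamic-programming pass, as the abstract suggests).
    Illustrative sketch only, not the paper's exact method.
    """
    a_star = np.argmax(q_a[next_state])
    boot = max(q_b[next_state, a_star], v_lower[next_state])
    return reward + gamma * boot
```

The clip can only raise an underestimated target, never lower it, which is consistent with the abstract's framing of the approach as a partial fix for underestimation rather than a fully unbiased estimator.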

Authors (6)
  1. Zhizhou Ren (13 papers)
  2. Guangxiang Zhu (8 papers)
  3. Hao Hu (114 papers)
  4. Beining Han (11 papers)
  5. Jianglun Chen (1 paper)
  6. Chongjie Zhang (68 papers)
Citations (16)
