
Estimation Error Correction in Deep Reinforcement Learning for Deterministic Actor-Critic Methods (2109.10736v2)

Published 22 Sep 2021 in cs.LG, cs.AI, and stat.ML

Abstract: In value-based deep reinforcement learning methods, approximation of value functions induces overestimation bias and leads to suboptimal policies. We show that in deep actor-critic methods that aim to overcome the overestimation bias, if the reinforcement signals received by the agent have high variance, a significant underestimation bias arises. To minimize the underestimation, we introduce a parameter-free, novel deep Q-learning variant. Our Q-value update rule combines the notions behind Clipped Double Q-learning and Maxmin Q-learning by computing the critic objective through a nested combination of maximum and minimum operators to bound the approximate value estimates. We evaluate our modification on a suite of OpenAI Gym continuous control tasks, improving the state of the art in every environment tested.
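
The abstract does not spell out the exact update rule, but the sketch below is one plausible reading of a "nested combination of maximum and minimum operators": take Clipped Double Q-style minima over pairs of target critics, then take the maximum over those pairwise minima to counteract the compounding underestimation of a plain minimum over the whole ensemble. The function name `nested_maxmin_target`, the pairing scheme, and all variable names are illustrative assumptions, not the authors' exact formulation.

```python
# A minimal, hedged sketch (not the paper's exact rule) of a nested max-min
# TD target over an ensemble of target critics, combining the ideas behind
# Clipped Double Q-learning (min over a pair) and Maxmin Q-learning.
import numpy as np

def nested_maxmin_target(q_values, reward, discount, done):
    """Compute a bounded TD target from target-critic estimates.

    q_values: array of shape (n_critics,) holding Q_i(s', a') for each
              target critic at the next state-action pair.
    """
    # Clipped Double Q-style lower bounds: min over each adjacent pair.
    # (Adjacent pairing is a hypothetical grouping choice.)
    pair_mins = [min(q_values[i], q_values[i + 1])
                 for i in range(len(q_values) - 1)]
    # Maxmin-style correction: max over the pairwise minima, avoiding the
    # heavy underestimation of taking the min over all critics at once.
    bounded_q = max(pair_mins)
    return reward + discount * (1.0 - done) * bounded_q

# Example with three hypothetical critic estimates:
target = nested_maxmin_target(np.array([10.2, 9.7, 10.5]),
                              reward=1.0, discount=0.99, done=0.0)
```

Under this reading, the nested operators keep the target between the pessimistic all-critics minimum and the optimistic single-critic maximum, which matches the abstract's stated goal of bounding the approximate value estimates.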

Authors (5)
  1. Baturay Saglam (12 papers)
  2. Enes Duran (4 papers)
  3. Dogan C. Cicek (6 papers)
  4. Furkan B. Mutlu (7 papers)
  5. Suleyman S. Kozat (50 papers)
Citations (12)
