Time-Scale Separation in Q-Learning: Extending TD($\Delta$) for Action-Value Function Decomposition (2411.14019v1)

Published 21 Nov 2024 in cs.LG and stat.ML

Abstract: Q-Learning is a fundamental off-policy reinforcement learning (RL) algorithm that approximates action-value functions in order to learn optimal policies. It nevertheless struggles to reconcile bias and variance, particularly when rewards arrive over long horizons. This paper introduces Q($\Delta$)-Learning, an extension of TD($\Delta$) to the Q-Learning framework. TD($\Delta$) enables efficient learning over several time scales by decomposing the Q($\Delta$)-function into components associated with distinct discount factors. This approach improves learning stability and scalability, especially for long-horizon tasks where discounting bias can impede convergence. Our method ensures that each component of the Q($\Delta$)-function is learned individually, so that shorter time scales converge quickly and in turn support the learning of longer time scales. Theoretical analysis and empirical evaluations on standard benchmarks such as Atari show that Q($\Delta$)-Learning outperforms conventional Q-Learning and TD learning methods in both tabular and deep RL settings.
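The abstract does not state the update rule explicitly, but the decomposition it describes is consistent with the TD($\Delta$) construction, in which the action-value function for the largest discount factor is expressed as a sum of delta components, $Q_{\gamma_Z} = \sum_{z=0}^{Z} W_z$ with $W_0 = Q_{\gamma_0}$ and $W_z = Q_{\gamma_z} - Q_{\gamma_{z-1}}$, each learned at its own time scale. The sketch below is a minimal tabular illustration under that assumption; the discount schedule, learning rate, and the exact form of the per-scale targets are illustrative choices, not the paper's specification.

```python
import numpy as np

# Minimal tabular sketch of a Q(Delta)-style update (an illustration, not the
# paper's exact algorithm). The largest-discount action-value function is
# represented as a sum of per-scale delta components W[z], following the
# TD(Delta) decomposition Q_{gamma_z} = sum_{k <= z} W[k].

n_states, n_actions = 16, 4
gammas = [0.0, 0.5, 0.9, 0.99]   # assumed increasing discount schedule
alpha = 0.1                      # assumed learning rate
W = np.zeros((len(gammas), n_states, n_actions))

def q_at_scale(z, s):
    """Reconstruct Q_{gamma_z}(s, .) by summing delta components up to scale z."""
    return W[: z + 1, s, :].sum(axis=0)

def update(s, a, r, s_next, done):
    """Apply one observed transition to every time scale."""
    # Scale 0: ordinary Q-learning target with the smallest discount factor.
    target = r + (0.0 if done else gammas[0] * q_at_scale(0, s_next).max())
    W[0, s, a] += alpha * (target - W[0, s, a])

    # Scales z >= 1: the delta component bootstraps from the reconstructed
    # shorter-scale value at the next state plus its own next-state estimate,
    # mirroring the TD(Delta) Bellman equation for value functions.
    for z in range(1, len(gammas)):
        if done:
            target = 0.0
        else:
            a_next = int(q_at_scale(z, s_next).argmax())   # greedy w.r.t. Q_{gamma_z}
            target = ((gammas[z] - gammas[z - 1]) * q_at_scale(z - 1, s_next).max()
                      + gammas[z] * W[z, s_next, a_next])
        W[z, s, a] += alpha * (target - W[z, s, a])
```

In this sketch the short-scale components see only heavily discounted returns and so converge quickly, while each longer-scale component learns only the residual on top of the shorter scales it bootstraps from; this per-component training is the mechanism the abstract credits for improved stability on long-horizon tasks.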
