Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction (2006.03041v3)

Published 4 Jun 2020 in cs.LG, eess.SP, math.OC, math.ST, stat.ML, and stat.TH

Abstract: Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP), based on a single trajectory of Markovian samples induced by a behavior policy. Focusing on a $\gamma$-discounted MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$, we demonstrate that the $\ell_{\infty}$-based sample complexity of classical asynchronous Q-learning --- namely, the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function --- is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{mix}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor, provided that a proper constant learning rate is adopted. Here, $t_{mix}$ and $\mu_{\min}$ denote respectively the mixing time and the minimum state-action occupancy probability of the sample trajectory. The first term of this bound matches the sample complexity in the synchronous case with independent samples drawn from the stationary distribution of the trajectory. The second term reflects the cost taken for the empirical distribution of the Markovian trajectory to reach a steady state, which is incurred at the very beginning and becomes amortized as the algorithm runs. Encouragingly, the above bound improves upon the state-of-the-art result \cite{qu2020finite} by a factor of at least $|\mathcal{S}||\mathcal{A}|$ for all scenarios, and by a factor of at least $t_{mix}|\mathcal{S}||\mathcal{A}|$ for any sufficiently small accuracy level $\varepsilon$. Further, we demonstrate that the scaling on the effective horizon $\frac{1}{1-\gamma}$ can be improved by means of variance reduction.
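
As a point of reference for the setting the abstract describes, here is a minimal NumPy sketch of classical asynchronous Q-learning: a single Markovian trajectory is generated by a behavior policy, and at each step only the visited state-action entry of the Q-table is updated with a constant learning rate. The tabular representation, the function name, and the simulated kernel `P` (used only to generate the trajectory; the learner itself never reads it) are illustrative assumptions, not the paper's code.

```python
import numpy as np

def async_q_learning(P, R, behavior_policy, gamma, eta, T, s0=0, seed=0):
    """Asynchronous Q-learning along a single Markovian trajectory.

    P:               transition kernel, shape (S, A, S); used here only to
                     simulate the trajectory, never by the learner itself
    R:               reward table, shape (S, A)
    behavior_policy: array of shape (S, A), each row summing to 1
    gamma:           discount factor in (0, 1)
    eta:             constant learning rate (the paper's analysis assumes
                     a properly chosen constant step size)
    T:               number of samples (trajectory length)
    """
    rng = np.random.default_rng(seed)
    S, A, _ = P.shape
    Q = np.zeros((S, A))
    s = s0
    for _ in range(T):
        a = rng.choice(A, p=behavior_policy[s])   # action from behavior policy
        s_next = rng.choice(S, p=P[s, a])         # Markovian transition
        # Only the visited (s, a) entry is updated at each step -- this is
        # what makes the scheme "asynchronous".
        td_target = R[s, a] + gamma * Q[s_next].max()
        Q[s, a] += eta * (td_target - Q[s, a])
        s = s_next
    return Q
```

Under a suitably chosen constant step size, the paper bounds how many such updates suffice for an entrywise $\varepsilon$-accurate Q-estimate. The variance-reduced variant mentioned at the end of the abstract would, roughly speaking, recenter the TD target around a periodically refreshed reference estimate; the precise scheme is specified in the paper rather than sketched here.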

Authors (5)
  1. Gen Li (143 papers)
  2. Yuting Wei (47 papers)
  3. Yuejie Chi (109 papers)
  4. Yuantao Gu (60 papers)
  5. Yuxin Chen (195 papers)
Citations (111)
