Convergence of Batch Asynchronous Stochastic Approximation With Applications to Reinforcement Learning (2109.03445v6)
Abstract: We begin by briefly surveying some results on the convergence of the Stochastic Gradient Descent (SGD) method, proved in a companion paper by the present authors. These results are based on viewing SGD as a version of Stochastic Approximation (SA). Ever since its introduction in the classic paper of Robbins and Monro in 1951, SA has become a standard tool for finding a solution of an equation of the form $f(\theta) = 0$ when only noisy measurements of $f(\cdot)$ are available. In most situations, \textit{every component} of the putative solution $\theta_t$ is updated at each step $t$. In some applications in Reinforcement Learning (RL), \textit{only one component} of $\theta_t$ is updated at each $t$. This is known as \textbf{asynchronous} SA. In this paper, we study \textbf{Batch Asynchronous SA (BASA)}, in which, at each step $t$, \textit{some but not necessarily all} components of $\theta_t$ are updated. The theory presented here covers both conventional (synchronous) SA and asynchronous SA, as well as every possibility in between. We provide sufficient conditions for the convergence of BASA, and also prove bounds on the \textit{rate} at which $\theta_t$ converges to the solution. In the case of conventional SGD, these results reduce to those proved in our companion paper. We then apply these results to the problem of finding a fixed point of a map when only noisy measurements of the map are available; this problem arises frequently in RL. We establish sufficient conditions for convergence, together with estimates of the rate of convergence.
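
To fix ideas, here is a minimal numerical sketch of a BASA-style iteration for the fixed-point problem mentioned above. Everything concrete in it is an assumption of ours rather than the paper's: the contractive affine map $g$, the Bernoulli-distributed choice of which components to update at each step, the noise level, and the use of per-component step sizes $1/\nu_t^i$ driven by local update counts.

```python
import numpy as np

# Illustrative sketch, not the paper's algorithm as stated: the contractive
# map g, the Bernoulli update pattern, the noise level, and the step-size
# rule are all our own choices, made only to show a BASA update in action.

rng = np.random.default_rng(0)
d = 5

# A contractive affine map g(theta) = A @ theta + b, rescaled so that the
# spectral norm of A is 0.9 < 1; its unique fixed point satisfies
# theta = g(theta), i.e., f(theta) = g(theta) - theta = 0.
A = rng.standard_normal((d, d))
A *= 0.9 / np.linalg.norm(A, 2)
b = rng.standard_normal(d)

def g(theta):
    return A @ theta + b

theta = np.zeros(d)
nu = np.zeros(d)  # per-component update counts ("local clocks")

for t in range(50_000):
    # Batch-asynchronous step: a random subset of components is updated.
    # Taking block = all True recovers synchronous SA; forcing exactly one
    # True entry per step recovers classical asynchronous SA.
    block = rng.random(d) < 0.4
    if not block.any():
        continue
    # Noisy measurement of g at the current iterate.
    y = g(theta) + 0.1 * rng.standard_normal(d)
    nu[block] += 1
    alpha = 1.0 / nu[block]  # Robbins-Monro step sizes on each local clock
    theta[block] += alpha * (y[block] - theta[block])

theta_star = np.linalg.solve(np.eye(d) - A, b)
print("distance to fixed point:", np.linalg.norm(theta - theta_star))
```

This is the kind of scheme that the paper's sufficient conditions are designed to cover: each component is updated only on the random steps when it belongs to the current block, with a step size determined by its own local clock. The particular constants and distributions above are arbitrary choices for illustration.
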