An ODE Method to Prove the Geometric Convergence of Adaptive Stochastic Algorithms (1811.06703v3)
Abstract: We consider stochastic algorithms derived from methods for solving deterministic optimization problems, especially comparison-based algorithms derived from stochastic approximation algorithms with a constant step-size. We develop a methodology for proving geometric convergence of the parameter sequence $\{\theta_n\}_{n\geq 0}$ of such algorithms. We employ the ordinary differential equation (ODE) method, which relates a stochastic algorithm to its mean ODE, along with a Lyapunov-like function $\Psi$ such that the geometric convergence of $\Psi(\theta_n)$ implies, in the case of an optimization algorithm, the geometric convergence of the expected distance between the optimum and the search point generated by the algorithm. We provide two sufficient conditions for $\Psi(\theta_n)$ to decrease at a geometric rate: $\Psi$ should decrease "exponentially" along the solution to the mean ODE, and the deviation between the stochastic algorithm and the ODE solution, measured by $\Psi$, should be bounded by $\Psi(\theta_n)$ times a constant. We also provide practical conditions under which the two sufficient conditions can be verified easily without knowing the solution of the mean ODE. Our results are any-time bounds on $\Psi(\theta_n)$, so we can deduce not only an asymptotic upper bound on the convergence rate, but also the first hitting time of the algorithm. The main results are applied to a comparison-based stochastic algorithm with a constant step-size for optimization on continuous domains.
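To make the two sufficient conditions concrete, here is a rough sketch; the notation below ($\varphi$, $h$, $C$, $\gamma$, $\epsilon$) is illustrative and not taken from the paper. Writing $\varphi(t;\theta)$ for the solution of the mean ODE started at $\theta$, $h$ for the constant step-size over one iteration, and $C$, $\gamma$, $\epsilon$ for positive constants, the two conditions can be read as

$$\Psi\bigl(\varphi(t;\theta)\bigr) \le C\, e^{-\gamma t}\, \Psi(\theta) \quad \text{for all } t \ge 0, \qquad \mathbb{E}\Bigl[\bigl|\Psi(\theta_{n+1}) - \Psi\bigl(\varphi(h;\theta_n)\bigr)\bigr| \,\Big|\, \theta_n\Bigr] \le \epsilon\, \Psi(\theta_n).$$

Combining the two gives $\mathbb{E}[\Psi(\theta_{n+1}) \mid \theta_n] \le (C e^{-\gamma h} + \epsilon)\, \Psi(\theta_n)$, so $\mathbb{E}[\Psi(\theta_n)]$ decreases geometrically whenever $C e^{-\gamma h} + \epsilon < 1$.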