Riemannian Stochastic Hybrid Gradient Algorithm for Nonconvex Optimization (2109.04289v3)
Abstract: In recent years, Riemannian stochastic gradient descent (R-SGD), Riemannian stochastic variance reduction (R-SVRG), and Riemannian stochastic recursive gradient (R-SRG) methods have attracted considerable attention in Riemannian optimization. Ordinarily, the convergence of the R-SRG algorithm cannot be analyzed on its own, because the conditional expectation of its descent direction is a biased estimate of the gradient. In this paper, we instead take a linear combination of the three descent directions on Riemannian manifolds (i.e., R-SRG, R-SVRG, and R-SGD) as the new descent direction, with time-varying combination parameters. First, we propose a Riemannian stochastic hybrid gradient (R-SHG) algorithm with adaptive parameters and establish its global convergence under a decaying step size. For a fixed step size, we consider both a fixed and a time-varying inner-loop length, and we quantify the convergence rate of the algorithm in each case. Since the global convergence of the R-SHG algorithm with adaptive parameters requires stronger differentiability of the objective, we also propose an R-SHG algorithm with time-varying parameters and obtain similar conclusions under weaker conditions.
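To make the hybrid-direction idea concrete, the following is a minimal Python sketch of one possible realization: at each step, an SGD-style, an SVRG-style, and an SRG-style Riemannian gradient estimator are formed and mixed by a convex combination with time-varying weights. The unit-sphere manifold, the least-squares objective, the 1/(t+1) weight schedule, and the use of tangent-space projection as a stand-in for vector transport are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Synthetic least-squares data; objective, manifold, and schedules are placeholders.
rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def egrad_i(x, i):
    # Euclidean gradient of the i-th sample loss 0.5 * (a_i^T x - b_i)^2.
    return (A[i] @ x - b[i]) * A[i]

def proj(x, g):
    # Orthogonal projection onto the tangent space of the unit sphere at x;
    # also used here as a crude stand-in for vector transport between nearby points.
    return g - (g @ x) * x

def retract(x, v):
    # Retraction on the sphere: move along the tangent vector, then renormalize.
    y = x + v
    return y / np.linalg.norm(y)

x = rng.standard_normal(d); x /= np.linalg.norm(x)
x_prev = x.copy()                                     # previous iterate (SRG-style term)
x_snap = x.copy()                                     # snapshot iterate (SVRG-style term)
full_snap = proj(x_snap, A.T @ (A @ x_snap - b) / n)  # full gradient at the snapshot
v = full_snap.copy()                                  # running recursive direction
eta = 0.05                                            # fixed step size (assumption)

for t in range(1, 201):
    i = rng.integers(n)
    g_sgd = proj(x, egrad_i(x, i))                                     # R-SGD term
    g_svrg = g_sgd - proj(x, egrad_i(x_snap, i)) + proj(x, full_snap)  # R-SVRG-style term
    g_srg = g_sgd - proj(x, egrad_i(x_prev, i)) + proj(x, v)           # R-SRG-style term

    # Hybrid direction: convex combination with (placeholder) time-varying weights.
    alpha_t, beta_t = 1.0 / (t + 1), 1.0 / (t + 1)
    v_new = alpha_t * g_sgd + beta_t * g_svrg + (1.0 - alpha_t - beta_t) * g_srg

    x_prev, x = x, retract(x, -eta * v_new)
    v = v_new
```

In this sketch the SRG-style term dominates as t grows while the SGD and SVRG terms decay, which mirrors the abstract's point that the biased recursive estimator is stabilized by mixing it with the other two directions; the actual parameter schedules and inner/outer loop structure are given in the paper.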