Faster Convergence of Riemannian Stochastic Gradient Descent with Increasing Batch Size (2501.18164v2)
Abstract: We have theoretically analyzed Riemannian stochastic gradient descent (RSGD) and found that using an increasing batch size leads to a faster RSGD convergence rate than using a constant batch size, not only with a constant learning rate but also with a decaying learning rate, such as cosine annealing and polynomial decay. The convergence rate of RSGD improves from $O(\sqrt{T^{-1}+\text{const.}})$ with a constant batch size to $O(T^{-\frac{1}{2}})$ with an increasing batch size, where $T$ denotes the number of iterations. Using principal component analysis and low-rank matrix completion tasks, we investigated, both theoretically and numerically, how an increasing batch size affects computational time as measured by stochastic first-order oracle (SFO) complexity. Increasing the batch size reduces the SFO complexity of RSGD. Furthermore, our numerical results demonstrated that an increasing batch size offers the advantages of both small and large constant batch sizes.
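As a rough illustration of the setting the abstract describes, the sketch below runs RSGD on the unit sphere for a toy leading-principal-component problem (one of the tasks mentioned) and grows the batch size on a fixed schedule. This is a minimal sketch, not the authors' code: the synthetic data, step size, initial batch size, and doubling schedule are all illustrative assumptions, not the paper's settings.

```python
# Minimal sketch (assumptions throughout): RSGD on the unit sphere for the
# leading principal component, with a batch size that doubles every 20 steps.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 50))         # synthetic data (assumption)

def riemannian_grad(w, batch):
    """Euclidean gradient of -w^T C w projected onto the sphere's tangent space at w."""
    euclid = -2.0 * batch.T @ (batch @ w) / len(batch)
    return euclid - (w @ euclid) * w           # remove the radial component

def retract(w, v):
    """Retraction on the sphere: move in the tangent direction, then renormalize."""
    w_new = w + v
    return w_new / np.linalg.norm(w_new)

w = rng.standard_normal(50)
w /= np.linalg.norm(w)

eta = 0.1                                      # constant learning rate (assumption)
batch_size = 8                                 # initial batch size (assumption)
for t in range(200):
    idx = rng.choice(len(X), size=min(batch_size, len(X)), replace=False)
    g = riemannian_grad(w, X[idx])
    w = retract(w, -eta * g)                   # RSGD step on the manifold
    if (t + 1) % 20 == 0:
        batch_size *= 2                        # increasing batch size schedule
```

Each RSGD step here computes a minibatch Riemannian gradient and retracts back onto the manifold; the only change relative to a constant-batch-size run is the schedule that enlarges the minibatch as training proceeds.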