Stochastic Moving Anchor Algorithms and a Popov's Scheme with Moving Anchor (2506.07290v1)
Abstract: Since their introduction, anchoring methods for extragradient-type saddle point problems have inspired a flurry of research due to their ability to provide order-optimal rates of accelerated convergence in very general problem settings. Such guarantees are especially important as researchers consider problems in AI and ML, where large problem sizes demand immense computational power. Much of the recent work explores theoretical aspects of this acceleration framework, connecting it to existing methods and to order-optimal convergence rates from the literature. In practice, however, stochastic oracles offer greater computational efficiency given the size of many modern optimization problems. To this end, this work equips the moving anchor variants [1] of the original anchoring algorithms [36] with stochastic implementations and robust analyses, bridging the gap between deterministic and stochastic algorithm settings. In particular, we demonstrate that an accelerated convergence rate theory for stochastic oracles also holds for our moving anchor scheme, itself a generalization of the original fixed anchor algorithms, and we provide numerical results that validate our theoretical findings. We also develop a tentative moving anchor Popov scheme based on the work in [33], with promising numerical results pointing toward a general convergence theory for such methods that has yet to be established.
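For intuition, here is a minimal sketch of the anchoring idea; the notation is illustrative rather than taken from the paper itself. Writing $F$ for the saddle operator, $\eta$ for a step size, and $\beta_k$ for an anchoring coefficient (e.g. $\beta_k = 1/(k+2)$ in the scheme of [36]), the fixed-anchor extra anchored gradient step takes the form

$$z^{k+1/2} = z^k + \beta_k\,(z^0 - z^k) - \eta\, F(z^k), \qquad z^{k+1} = z^k + \beta_k\,(z^0 - z^k) - \eta\, F(z^{k+1/2}),$$

anchored at the initial point $z^0$. The moving anchor variants of [1] replace the fixed $z^0$ with an evolving anchor point $\bar{z}^k$; one natural form of such an update, stated here only as an assumption for illustration, moves the anchor toward the current iterate,

$$\bar{z}^{k+1} = \bar{z}^k + \gamma_{k+1}\,\big(z^{k+1} - \bar{z}^k\big), \qquad \gamma_{k+1} \in [0,1].$$

The stochastic setting studied in this paper would then replace evaluations of $F$ with calls to a noisy oracle $\tilde{F}$.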