
Extragradient method with variance reduction for stochastic variational inequalities (1703.00260v1)

Published 1 Mar 2017 in math.OC

Abstract: We propose an extragradient method with stepsizes bounded away from zero for stochastic variational inequalities requiring only pseudo-monotonicity. We provide convergence and complexity analysis, allowing for an unbounded feasible set, an unbounded operator, and non-uniform variance of the oracle, and we do not require any regularization. Alongside the stochastic approximation procedure, we iteratively reduce the variance of the stochastic error. Our method attains the optimal oracle complexity $\mathcal{O}(1/\epsilon^2)$ (up to a logarithmic term) and a faster rate $\mathcal{O}(1/K)$ in terms of the mean (quadratic) natural residual and the D-gap function, where $K$ is the number of iterations required for a given tolerance $\epsilon>0$. Such a convergence rate represents an acceleration with respect to the stochastic error. The generated sequence also enjoys a new feature: the sequence is bounded in $L^p$ if the stochastic error has finite $p$-moment. Explicit estimates for the convergence rate, the oracle complexity and the $p$-moments are given depending on problem parameters and the distance of the initial iterate to the solution set. Moreover, sharper constants are possible if the variance is uniform over the solution set or the feasible set. Our results provide new classes of stochastic variational inequalities for which a convergence rate of $\mathcal{O}(1/K)$ holds in terms of the mean-squared distance to the solution set. Our analysis includes the distributed solution of pseudo-monotone Cartesian variational inequalities under partial coordination of parameters between users of a network.
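
The scheme described above combines the classical extragradient (two-projection) iteration with variance reduction obtained by averaging the stochastic oracle over a mini-batch whose size grows along the iterations, which is what allows a stepsize bounded away from zero. The following is a minimal sketch of such an iteration in Python under illustrative assumptions: the feasible set, operator, stepsize, and batch-growth schedule (`project_C`, `F_sample`, `alpha`, `batch = k`) are placeholders chosen for readability, not the paper's exact choices.

```python
import numpy as np

def project_C(x):
    """Euclidean projection onto the feasible set C (here: the nonnegative orthant)."""
    return np.maximum(x, 0.0)

def F_sample(x, rng, batch_size):
    """Mini-batch estimate of F(x) = E[F(x, xi)].
    Hypothetical operator: F(x, xi) = A @ x + b + noise(xi), with A skew-symmetric,
    so F is monotone (hence pseudo-monotone) and x* = 0 solves VI(F, C)."""
    n = x.size
    A = np.triu(np.ones((n, n)), 1)
    A = A - A.T                                   # skew-symmetric part => monotone operator
    b = np.ones(n)
    noise = 0.1 * rng.standard_normal((batch_size, n)).mean(axis=0)
    return A @ x + b + noise

def stochastic_extragradient(x0, iterations=200, alpha=0.1, seed=0):
    """Extragradient iteration with a constant stepsize (bounded away from zero)
    and a mini-batch size growing with k, which drives the oracle variance to zero."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    for k in range(1, iterations + 1):
        batch = k                                              # illustrative growth schedule
        z = project_C(x - alpha * F_sample(x, rng, batch))     # extrapolation step
        x = project_C(x - alpha * F_sample(z, rng, batch))     # update from the same anchor x
    return x

x = stochastic_extragradient(np.ones(5))
# Natural residual r(x) = ||x - project_C(x - F(x))||, estimated here with a large batch;
# the abstract's O(1/K) rate is stated in terms of the mean quadratic natural residual.
residual = np.linalg.norm(x - project_C(x - F_sample(x, np.random.default_rng(1), 10_000)))
print(f"approximate natural residual: {residual:.2e}")
```

In this sketch the growing mini-batch plays the role of the iterative variance reduction mentioned in the abstract; the paper's actual sampling schedule, stepsize policy, and constants should be taken from the paper itself.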
