
Stochastic Variational Inequalities

Updated 11 January 2026
  • Stochastic variational inequalities are a framework for modeling optimization, equilibrium, and control problems under uncertainty by incorporating random operators into the problem formulation.
  • They unify classical variational inequalities and stochastic optimization, enabling analysis of equilibria in settings like games, network flows, and risk-averse models.
  • Stochastic algorithms such as projection methods and extragradient techniques offer robust convergence even for complex, large-scale problems with random constraints.

A stochastic variational inequality (SVI) is a mathematical framework for modeling equilibrium, optimization, and control problems under uncertainty, generalizing both stochastic optimization and classical variational inequalities. In an SVI, the mapping (operator) appearing in the variational inequality depends on random variables, typically entering through an expectation, and feasible sets may also be random or parametrized by random data. This unifies a broad spectrum of applications, from stochastic Nash equilibria and learning in games to stochastic control, network congestion, mechanics with noise, equilibrium in uncertain environments, and stochastic complementarity.

1. Mathematical Formulation and Representative Models

Let $X \subseteq \mathbb{R}^n$ be a (deterministic) closed convex set, and let $\xi$ be a random variable on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. A stochastic variational inequality consists in finding $x^* \in X$ such that

$$\langle F(x^*), x - x^* \rangle \geq 0, \quad \forall x \in X,$$

where

$$F(x) = \mathbb{E}[\hat{F}(x, \xi)]$$

and $\hat{F}$ is a random operator. In the most general forms, both $X$ and $F$ can depend on $\xi$, and multivalued mappings with additional convex subdifferential or normal cone terms may appear.
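In practice the expectation defining $F$ is rarely available in closed form and is replaced by sampling. Below is a minimal sketch of a sample-average approximation of $F$ and of the natural-map residual used to measure how far a point is from solving the SVI; it assumes a box feasible set, and `F_hat` and `sample_xi` are hypothetical user-supplied callables, not functions from any cited work.

```python
import numpy as np

def saa_operator(F_hat, sample_xi, x, n_samples=1000, rng=None):
    """Sample-average approximation of F(x) = E[F_hat(x, xi)]."""
    rng = rng if rng is not None else np.random.default_rng()
    return np.mean([F_hat(x, sample_xi(rng)) for _ in range(n_samples)], axis=0)

def natural_residual(F_hat, sample_xi, x, lo, hi, n_samples=1000):
    """Residual ||x - Pi_X(x - F(x))|| for the box X = [lo, hi]^n;
    it vanishes exactly at solutions of the (sample-average) VI."""
    Fx = saa_operator(F_hat, sample_xi, x, n_samples)
    return np.linalg.norm(x - np.clip(x - Fx, lo, hi))
```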

SVIs arise, for example, in stochastic Nash games and learning in games, stochastic control, network congestion under uncertainty, mechanics with noise, and stochastic complementarity problems.

2. Existence, Uniqueness, and Structural Properties

Existence and uniqueness results for SVIs mirror those for classical VIs, but require additional care due to stochasticity and possible non-monotonicity. For monotone and Lipschitz mappings, the Kakutani–Fan–Glicksberg fixed-point theorem, together with weak compactness and upper-semicontinuity arguments, yields existence. Strong monotonicity ensures uniqueness.

Linear/Polynomial Growth and Quasi-Sharpness: For non-monotone mappings, the $p$-quasi-sharpness property

$$\langle F(u), u - u^* \rangle \geq \mu \, \mathrm{dist}(u, U^*)^p$$

with linear or polynomial growth of $F$ allows one to establish convergence and, in some cases, regularity of the solution set (Vankov et al., 2023).
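For intuition (a standard observation, not specific to the cited work): any $\mu$-strongly monotone $F$ is $2$-quasi-sharp, since for a solution $u^* \in U^*$ and any $u \in X$,

$$\langle F(u), u - u^* \rangle \geq \langle F(u^*), u - u^* \rangle + \mu \|u - u^*\|^2 \geq \mu \, \mathrm{dist}(u, U^*)^2,$$

using the VI condition $\langle F(u^*), u - u^* \rangle \geq 0$ and the bound $\mathrm{dist}(u, U^*) \leq \|u - u^*\|$.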

Pseudomonotone SVIs: Extensions to pseudomonotone multi-stage or parametric SVIs are enabled by an isomorphism to finite-dimensional deterministic VIs, obtained by scenario unfolding together with a careful transfer of measurability and convexity properties. The solution set of a pseudomonotone SVI is always convex, and it is nonempty and compact under various recession or coercivity conditions (Cui et al., 2022).
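To illustrate scenario unfolding in the simplest two-stage case (a schematic sketch, not the precise construction of the cited paper): for finitely many scenarios $\xi_1, \dots, \xi_S$ with probabilities $p_1, \dots, p_S$, a two-stage SVI in a first-stage decision $x \in X$ and scenario-wise responses $y_s \in Y_s$ becomes a deterministic VI on $X \times Y_1 \times \cdots \times Y_S$ with stacked operator

$$G(x, y_1, \dots, y_S) = \Big( \textstyle\sum_{s=1}^S p_s \hat{F}_1(x, y_s, \xi_s),\; p_1 \hat{F}_2(x, y_1, \xi_1),\; \dots,\; p_S \hat{F}_2(x, y_S, \xi_S) \Big),$$

where $\hat{F}_1$ and $\hat{F}_2$ denote the first- and second-stage components of the random operator.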

3. Stochastic Algorithms and Convergence Rates

Projection and Extragradient Methods: The two most fundamental classes of algorithms for SVIs are stochastic projection-type methods and extragradient (mirror-prox, Popov, AMP) methods.

  • Stochastic Projection (SGD-like): Iterates are updated by $x_{k+1} = \Pi_X(x_k - \gamma_k \hat{F}(x_k, \xi_k))$, possibly with adaptive or clipped stepsizes for robustness in heavy-tailed or non-Lipschitz settings (Vankov et al., 2024); see the sketch after this list.
  • Stochastic Mirror-Prox and Popov: These include the algorithmic templates

$$\begin{aligned} y_{k+1} &= P_U(y_k - \alpha_k \hat{F}(h_k, \xi_k)), \\ h_{k+1} &= P_U(y_{k+1} - \alpha_{k+1} \hat{F}(h_k, \xi_k)) \end{aligned}$$

and more generally, their mirror or Bregman-divergence-based analogues, often requiring only one or two stochastic oracle calls per iteration (Chakraborty et al., 31 Jul 2025, Vankov et al., 2023, Chen et al., 2014, Chakraborty et al., 16 Sep 2025).

  • Variance Reduction, Batching, and Acceleration: For finite-sum or empirical-mean SVIs, mini-batching, epochwise variance reduction, and momentum-accelerated mirror-prox techniques yield optimal oracle and communication complexity (Pichugin et al., 2024, Chen et al., 2014, Kovalev et al., 2022).
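As a concrete reference point, here is a minimal NumPy sketch of the Euclidean versions of both templates; the box feasible set, the step-size choices, and the function names are illustrative assumptions rather than the tuned schemes of the cited papers. Note that the Popov template issues a single fresh stochastic oracle call per iteration and reuses it in both projections, which is what makes it cheaper than two-call extragradient methods.

```python
import numpy as np

def project(x, lo, hi):
    """Euclidean projection onto the box X = [lo, hi]^n (stand-in for Pi_X / P_U)."""
    return np.clip(x, lo, hi)

def stochastic_projection(F_hat, sample_xi, x0, lo, hi, n_iters=10_000, c=1.0, seed=0):
    """SGD-like method: x_{k+1} = Pi_X(x_k - gamma_k * F_hat(x_k, xi_k))."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(n_iters):
        gamma = c / np.sqrt(k + 1)                 # diminishing step size (illustrative)
        x = project(x - gamma * F_hat(x, sample_xi(rng)), lo, hi)
    return x

def stochastic_popov(F_hat, sample_xi, x0, lo, hi, n_iters=10_000, alpha=0.05, seed=0):
    """Popov template: one stochastic oracle call per iteration at the
    extrapolation point h_k, used in both projection steps."""
    rng = np.random.default_rng(seed)
    y = np.asarray(x0, dtype=float)
    h = y.copy()
    for _ in range(n_iters):
        g = F_hat(h, sample_xi(rng))               # single oracle call F_hat(h_k, xi_k)
        y = project(y - alpha * g, lo, hi)         # y_{k+1} = P_U(y_k - alpha_k * g)
        h = project(y - alpha * g, lo, hi)         # h_{k+1} = P_U(y_{k+1} - alpha_{k+1} * g)
    return y
```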

Convergence Rates:

  • For monotone and Lipschitz $F$, stochastic mirror-prox and Popov methods achieve optimal $\mathcal{O}(1/\sqrt{N})$ rates for the (duality) gap in expectation, and $\mathcal{O}(1/N)$ in deterministic smooth settings (Chakraborty et al., 31 Jul 2025, Chen et al., 2014).
  • For strongly monotone VIs, optimal $\mathcal{O}(\kappa \ln(1/\epsilon))$ iteration complexity is achieved, matching lower bounds (Huang et al., 2021).
  • Non-monotone, $p$-quasi-sharp mappings admit almost-sure convergence under diminishing stepsizes, with explicit last-iterate, non-asymptotic rates for $p = 2$ (Vankov et al., 2023, Vankov et al., 2024).

4. Dealing With Constraints and Large-Scale Structures

Stochastic VIs often arise with complex feasible sets:

  • Incremental Constraint Projections: For feasible sets defined as intersections, incremental or randomized projections onto one or a few components per iteration significantly reduce per-iteration cost, maintain convergence, and allow scaling to massive or online settings (Iusem et al., 2017, Chakraborty et al., 16 Sep 2025).
  • Randomized Feasibility-Update Algorithms: For $X = \cap_{i=1}^m X_i$, randomized extragradient and Popov schemes project onto a single $X_{i_k}$ at each iteration, yet achieve $\mathcal{O}(1/\sqrt{N})$ convergence in the modified dual gap (Chakraborty et al., 16 Sep 2025). These algorithms maintain feasibility in expectation and render large-scale constraint handling tractable; a minimal sketch follows this list.
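A minimal sketch of the single-component-projection idea, under illustrative assumptions (uniform sampling of the component index, a user-supplied list of projection callables, and the same hypothetical `F_hat`/`sample_xi` oracles as above):

```python
import numpy as np

def randomized_popov_intersection(F_hat, sample_xi, projections, x0,
                                  n_iters=10_000, alpha=0.05, seed=0):
    """Popov-style updates for X = X_1 ∩ ... ∩ X_m: each iteration projects onto
    one uniformly sampled component X_{i_k} instead of the full intersection."""
    rng = np.random.default_rng(seed)
    y = np.asarray(x0, dtype=float)
    h = y.copy()
    for _ in range(n_iters):
        proj = projections[rng.integers(len(projections))]  # random component i_k
        g = F_hat(h, sample_xi(rng))                        # one stochastic oracle call
        y = proj(y - alpha * g)
        h = proj(y - alpha * g)
    return y
```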

Distributed and Decentralized Algorithms: For networked and federated scenarios, the literature establishes optimal communication and local-iteration complexity bounds under strong/monotone assumptions, using accelerated consensus, variance-reduced proximal updates, and multi-agent averaging.
