
Convex Mixed Variational Inequality

Updated 1 December 2025
  • The convex mixed variational inequality problem is a framework that unifies monotone operator inclusions, convex constraints, and nonsmooth objectives, offering a robust approach to equilibrium modeling.
  • Representative models span optimization over equilibrium sets, composite inclusions, and conically constrained saddle-point systems, with strong convergence guarantees.
  • Advanced solution methods, including inertial proximal techniques and forward-backward schemes, ensure effective handling of large-scale structured problems.

A convex mixed variational inequality problem (mixed VI or MVIP) generalizes the classical variational inequality (VI) framework by unifying monotone operator inclusions, convex inequality constraints, and often nonsmooth objective terms. Such problems are fundamental in optimization, equilibrium modeling, saddle-point systems, and constrained convex analysis, supporting a spectrum of algorithmic and modeling innovations spanning smooth, nonsmooth, distributed, and large-scale contexts.

1. Formal Definition and Problem Structure

Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$, and let $C \subset H$ be closed, convex, and nonempty. Let $T: H \to H$ be single- or set-valued, typically monotone and continuous (or maximal monotone), and let $g: C \to (-\infty, +\infty]$ be a proper, convex, lower semicontinuous function. The general convex mixed VI is

$$\text{find } x^* \in C \text{ such that } \langle T x^*, u - x^* \rangle + g(u) - g(x^*) \ge 0, \quad \forall u \in C.$$

This encompasses standard VIs (when $g \equiv 0$), monotone inclusions, composite inclusion problems, and equilibrium constraints. Mixed VIs naturally cover instances where the feasible set is itself a convex program or a generalized Nash equilibrium. Solution concepts rely on monotonicity of $T$ and convexity of $g$; dual gap functions,

$$\mathrm{Gap}(x) := \sup_{y \in C} \langle T(y), x - y \rangle,$$

quantify how far a candidate point is from solving the problem and provide termination criteria (Nwakpa et al., 23 Nov 2025, Nwakpa et al., 23 Nov 2025, Cruz et al., 2013).
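As a concrete check, the dual gap can be estimated by sampling the feasible set. The sketch below takes $g \equiv 0$ (the standard-VI case), a one-dimensional monotone operator, and a grid over $C$, all chosen purely for illustration:

```python
import numpy as np

def dual_gap(T, x, C_samples):
    """Estimate Gap(x) = sup_{y in C} <T(y), x - y> by sampling C.
    Nonnegative for monotone T, and zero exactly at solutions
    (here g == 0, so this is the standard-VI dual gap)."""
    return max(float(np.dot(T(y), x - y)) for y in C_samples)

# Toy monotone operator T(x) = x - 1 on C = [0, 2]; the solution is x* = 1.
T = lambda x: x - 1.0
grid = [np.array([t]) for t in np.linspace(0.0, 2.0, 2001)]

print(round(dual_gap(T, np.array([1.0]), grid), 6))  # 0.0  at the solution
print(round(dual_gap(T, np.array([0.0]), grid), 6))  # 0.25 away from it
```

In practice the supremum cannot be sampled on a grid in high dimension; solvers instead bound it via problem structure, but the zero-at-solution property used as a termination test is the same.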

2. Representative Models and Applications

Convex mixed VIs arise in the following contexts:

  • Optimization over equilibrium and complementarity sets: Minimize a convex objective $f(x)$ over the solution set $\mathrm{SOL}(Q,F)$ of the monotone VI $\langle F(x^*), y - x^* \rangle \ge 0$ for all $y \in Q$, with $F$ decomposed as a sum of monotone agent-specific maps. Applications include best Nash equilibria and generalized transportation networks (Kaushik et al., 2021, Kaushik et al., 2020).
  • Composite inclusion models: Problems involving $T = \sum_i A_i$, where the $A_i$ may be nonsmooth or set-valued, with convex constraints $g(x) \le 0$ (e.g., in signal processing, convex feasibility, and nonsmooth monotone inclusions) (Cruz et al., 2013).
  • Conically constrained and saddle-point settings: Mixed VIs with conic constraints ($\Theta(x) \in -\mathcal{C}$) and convex composite objectives, as in conic generalized Nash games or augmented Lagrangian saddle-point approaches (Zhao et al., 2023, Juditsky et al., 2021).
  • Large-scale and structured domains: Problems on LMO-representable sets, such as matrix completion or robust learning, where expensive projections are replaced by linear minimization oracles (Juditsky et al., 2013).
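For the last class, the key primitive is the linear minimization oracle (LMO). A minimal sketch for the nuclear-norm ball, the standard oracle in matrix-completion settings (the matrix and radius below are illustrative):

```python
import numpy as np

def nuclear_lmo(G, tau):
    """Linear minimization oracle over {X : ||X||_* <= tau}:
    argmin_X <G, X> = -tau * u1 v1^T, with (u1, v1) the top singular
    pair of G. One top-SVD is far cheaper than projecting onto the
    nuclear-norm ball, which needs a full SVD plus thresholding."""
    U, s, Vt = np.linalg.svd(G)
    return -tau * np.outer(U[:, 0], Vt[0, :])

G = np.array([[3.0, 0.0], [0.0, 1.0]])
X = nuclear_lmo(G, 2.0)
print(X)  # puts all mass -tau on the leading singular direction
```

Conditional-gradient-type VI methods call only this oracle per iteration, which is what makes LMO-representable domains tractable at scale.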

3. Solution Algorithms: Classical and Contemporary Methods

Research on convex mixed VIs has led to a range of algorithmic frameworks:

a. Proximal and Inertial-Type Methods

Proximal-point and contraction schemes, often incorporating inertial (momentum) and correction terms, target monotone operators with composite nonsmooth structure. Recent variants utilize inertial extrapolation, dual correction, and relaxation to accelerate weak convergence (Nwakpa et al., 23 Nov 2025). The general inertial proximal-point method parameterizes iterates by

$$\bar w^k := w^k + \alpha_k (w^k - w^{k-1}),$$

followed by a proximal inclusion: $$\phi(w) - \phi(w^{k+1}) + \left\langle w - w^{k+1},\, F(w^{k+1}) + \tfrac{1}{\lambda_k} G(w^{k+1} - \bar w^k) \right\rangle \ge 0 \quad \text{for all } w.$$ This achieves non-ergodic $o(1/k)$ convergence in the residual under suitable monotonicity and parameter control (Chen et al., 2014).
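A minimal numerical sketch of this scheme, under simplifying assumptions: $\phi \equiv 0$, $G = I$, and an affine monotone $F(w) = Aw + b$ so that the proximal inclusion can be solved exactly by one linear solve (all names below are illustrative):

```python
import numpy as np

def inertial_ppa(A, b, w0, lam=1.0, alpha=0.3, iters=200):
    """Inertial proximal-point sketch for the monotone inclusion
    0 = F(w) = A w + b (assuming phi == 0 and G = I).
    Each step extrapolates, then solves F(w) + (w - w_bar)/lam = 0,
    i.e. (A + I/lam) w = w_bar/lam - b, exactly."""
    w_prev, w = w0.copy(), w0.copy()
    M = A + np.eye(len(b)) / lam
    for _ in range(iters):
        w_bar = w + alpha * (w - w_prev)   # momentum with alpha < 1/3
        w_prev, w = w, np.linalg.solve(M, w_bar / lam - b)
    return w

# Rotation-plus-identity operator: monotone but not a gradient map.
A = np.array([[1.0, 2.0], [-2.0, 1.0]])
b = np.array([1.0, -1.0])
w = inertial_ppa(A, b, np.zeros(2))
print(np.allclose(A @ w + b, 0.0, atol=1e-8))  # True: residual F(w) ~ 0
```

The extrapolation parameter is kept below the $1/3$ threshold discussed in Section 4; with a nonzero $\phi$ the linear solve would be replaced by a genuine proximal subproblem.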

b. Splitting and Relaxed-Projection Algorithms

Splitting strategies exploiting operator decomposability and constraints via separating hyperplanes avoid expensive multi-operator projections. The relaxed-projection splitting method alternates inner cycles of half-space projections (to enforce constraints defined by nonsmooth gg) with blockwise monotone steps, converging weakly with only subgradient evaluations and no resolvent subproblems (Cruz et al., 2013).
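The half-space projection building block can be sketched as follows. This illustrates only the separating-hyperplane ingredient on toy constraints, not the full splitting method of Cruz et al.:

```python
import numpy as np

def halfspace_step(x, g_val, subgrad):
    """Project x onto the half-space {y : g(x) + <s, y - x> <= 0},
    which contains the level set {g <= 0} by convexity of g.
    Needs only g(x) and one subgradient, no resolvent subproblem."""
    if g_val <= 0.0:
        return x  # x already satisfies this constraint
    return x - (g_val / np.dot(subgrad, subgrad)) * subgrad

# Feasibility for {x : ||x||^2 <= 1} intersected with {x : x[0] >= 0.5}
# via cyclic half-space projections (constraints chosen for illustration).
g1 = lambda x: (np.dot(x, x) - 1.0, 2.0 * x)        # ball, subgradient 2x
g2 = lambda x: (0.5 - x[0], np.array([-1.0, 0.0]))  # affine half-plane

x = np.array([3.0, 3.0])
for _ in range(100):
    for g in (g1, g2):
        val, s = g(x)
        x = halfspace_step(x, val, s)
print(np.dot(x, x) <= 1.0 + 1e-6, x[0] >= 0.5 - 1e-6)  # True True
```

In the full method, cycles of such projections enforce the nonsmooth constraint while separate blockwise steps handle the monotone operators.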

c. Forward-Backward(-Forward) Dynamical Systems

Continuous-time and discretized forward-backward-forward schemes extend classical proximal dynamics to the mixed VI context. For $h$-strongly pseudomonotone $T$ and convex $h$, the system

$$y(t) = \mathrm{prox}_{\lambda h}\bigl(x(t) - \lambda T(x(t))\bigr), \qquad \dot x(t) + x(t) = y(t) + \lambda \bigl(T(x(t)) - T(y(t))\bigr)$$

enjoys global exponential stability toward the solution when $\lambda$ is properly chosen, generalizing Lyapunov-based convergence beyond weak monotonicity (Nwakpa et al., 23 Nov 2025).
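An explicit-Euler discretization of these dynamics can be sketched as follows. The operator, step sizes, and data are illustrative: $T(x) = x - b$ is strongly monotone and $h = \mu \|\cdot\|_1$, so the mixed VI is $\min_x \tfrac12\|x-b\|^2 + \mu\|x\|_1$ with the closed-form soft-thresholding solution:

```python
import numpy as np

def soft(v, t):
    """prox of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbf_flow(T, prox_lh, x0, lam=0.5, tau=0.1, steps=2000):
    """Explicit Euler steps on the forward-backward-forward dynamics
       y = prox_{lam h}(x - lam T(x)),
       x' + x = y + lam (T(x) - T(y))."""
    x = x0.copy()
    for _ in range(steps):
        y = prox_lh(x - lam * T(x))
        x = x + tau * (-x + y + lam * (T(x) - T(y)))
    return x

b, mu, lam = np.array([2.0, 0.3, -1.5]), 0.5, 0.5
T = lambda x: x - b
x = fbf_flow(T, lambda v: soft(v, lam * mu), np.zeros(3))
print(np.round(x, 4))  # ~ soft(b, 0.5) = [1.5, 0, -1]
```

The trajectory converges to the soft-thresholded point at a geometric (discrete-exponential) rate, mirroring the continuous-time exponential stability.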

d. Iterative Regularization and Block Algorithms

Single-loop incremental and block-coordinate regularized gradient schemes, such as pair-IG (for agent-structured problems) and aRB-IRG (for Cartesian product sets), avoid inner-loop VI solves by blending step-size and regularization schedules within one iteration. For convex objectives and monotone VIs, both suboptimality and infeasibility decay at rates $O(N^{-1/4})$ for pair-IG and aRB-IRG with properly balanced schedules over $N$ iterations (Kaushik et al., 2021, Kaushik et al., 2020).
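The single-loop idea can be sketched as follows: one projected step per iteration on $F + \rho_k \nabla f$, with both the step size $\gamma_k$ and the regularization $\rho_k$ decaying. The decay exponents, problem data, and names below are illustrative; the cited papers prescribe the exact schedules behind their rates:

```python
import numpy as np

def single_loop_irg(F, grad_f, proj, x0, N, gamma0=1.0, rho0=1.0):
    """Iteratively-regularized gradient sketch: no inner VI solves,
    just one projected step per iteration on F + rho_k * grad_f,
    with diminishing step gamma_k and regularization rho_k."""
    x = x0.copy()
    for k in range(1, N + 1):
        gamma, rho = gamma0 / k**0.5, rho0 / k**0.25
        x = proj(x - gamma * (F(x) + rho * grad_f(x)))
    return x

# Select the minimum-norm point of SOL(Q, F) = {x in Q : x1 + x2 = 1},
# where F = grad of 0.5*(x1 + x2 - 1)^2 is monotone and f = 0.5*||x||^2.
F = lambda x: (x[0] + x[1] - 1.0) * np.ones(2)
grad_f = lambda x: x
proj = lambda x: np.clip(x, -2.0, 2.0)  # Q = [-2, 2]^2

x = single_loop_irg(F, grad_f, proj, np.array([2.0, 0.0]), N=100_000)
print(np.round(x, 1))  # ~ [0.5, 0.5], the least-norm equilibrium
```

The iterate tracks the Tikhonov path of the regularized problems, so the bias toward the $f$-optimal solution of the VI solution set vanishes as $\rho_k \to 0$.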

| Algorithm | Key Ingredients | Complexity Rate/Type |
|---|---|---|
| Inertial proximal | Extrapolation, weighted proximal step | $o(1/k)$ (non-ergodic residual) |
| Splitting | Hyperplane projections, operator decomposition | Weak convergence |
| Forward-backward | Prox/eval per step, Lyapunov analysis | Exponential (if strongly pseudomonotone) |
| Single-loop reg. | On-the-fly regularization/stepsize | $O(N^{-1/4})$ (non-asymptotic) |

4. Convergence Analysis and Rates

Convergence properties are tied to monotonicity, convexity, and regularity of the ingredients:

  • Under monotonicity and standard coercivity, weak convergence to a solution is typical (e.g., Opial's lemma for Hilbert spaces) (Cruz et al., 2013, Nwakpa et al., 23 Nov 2025, Nwakpa et al., 23 Nov 2025).
  • When $T$ is $h$-strongly pseudomonotone, forward-backward-forward dynamics provide exponential rates (Nwakpa et al., 23 Nov 2025).
  • Inertial methods with sufficiently small extrapolation parameters $\alpha_k < 1/3$ ensure $o(1/k)$ decay in proximal residuals (Chen et al., 2014).
  • Single-timescale incremental/regularized-gradient methods provide explicit non-asymptotic suboptimality and infeasibility rates of $O(N^{-1/4})$, i.e., $O(\varepsilon^{-4})$ iteration complexity for achieving accuracy $\varepsilon$ (Kaushik et al., 2021, Kaushik et al., 2020).
  • The augmented Lagrangian method ALAVI achieves $o(1/\sqrt{k})$ global convergence and an $O(1/k)$ ergodic rate under monotonicity, and a local linear rate under metric subregularity (Zhao et al., 2023).

5. Modeling, Representability, and Reduction to Standard Conic Programs

Juditsky and Nemirovski established that convex mixed VIs with monotone operators and elaborate constraint sets (including equalities, conic, and semidefinite inequalities) can be algorithmically reduced, via $K$-conic representation, to standard conic optimization problems (Juditsky et al., 2021, Juditsky et al., 2013). Feasible sets and monotone operators are encoded as block conic systems, and an $\varepsilon$-approximate solution of the original mixed VI is found by solving a conic feasibility or dual-gap minimization problem. This reduction supports the use of off-the-shelf conic solvers (e.g., MOSEK, SDPT3) across a wide spectrum of modeling regimes, provided conic data is available.

6. Numerical Results and Practical Considerations

Practical effectiveness of contemporary convex mixed VI methods is illustrated in diverse applications:

  • Pair-IG and aRB-IRG demonstrate rapid decay of both infeasibility and objective suboptimality for distributed Nash equilibria, stochastic transportation networks, and SVM training, providing wall-clock advantages over classical extragradient and incremental methods as data size grows (Kaushik et al., 2021, Kaushik et al., 2020).
  • Relaxed-inertial-contraction proximal methods require fewer iterations and reduced CPU time than classical extragradient and projection-based competitors across electrical-circuit, pseudomonotone, and positive-definite test instances (Nwakpa et al., 23 Nov 2025).
  • Forward-backward-forward dynamical systems achieve exponential convergence for strongly pseudomonotone VIs in practical $\ell_1$-regularized logistic regression and low-dimensional geometric examples, with trajectories rapidly converging to equilibrium (Nwakpa et al., 23 Nov 2025).
  • Augmented Lagrangian ALAVI scales to nonlinear and non-monotone VIs, with competitive complexity in both global and locally strongly-regular regimes (Zhao et al., 2023).

7. Outlook and Extensions

Recent developments enable efficient resolution of convex mixed VIs in large-scale, distributed, nonsmooth, and nonmonotone settings, with strong theoretical guarantees and scalable implementations. The conic-representability paradigm unifies algorithmic modeling, while advanced inertial, single-timescale, and superiorization strategies broaden the class of tractable problems. Nonetheless, open challenges remain for exact convergence in nonmonotone or nonconvex settings, parameter automation (e.g., for penalty, regularization, or inertia), and further computational improvements in high dimensions—especially for structures not easily amenable to conic reduction or proximal mapping.

Key references: (Nwakpa et al., 23 Nov 2025, Kaushik et al., 2021, Chen et al., 2014, Cruz et al., 2013, Nwakpa et al., 23 Nov 2025, Zhao et al., 2023, Kaushik et al., 2020, Juditsky et al., 2021, Juditsky et al., 2013, Nurminski, 2016).
