Convex Mixed Variational Inequality
- The convex mixed variational inequality problem is a framework that unifies monotone operator inclusions, convex constraints, and nonsmooth objectives, offering a robust approach to equilibrium modeling.
- Representative models span optimization over equilibrium sets, composite inclusions, and conically constrained saddle-point systems, with strong convergence guarantees.
- Advanced solution methods, including inertial proximal techniques and forward-backward schemes, ensure effective handling of large-scale structured problems.
A convex mixed variational inequality problem (mixed VI or MVIP) generalizes the classical variational inequality (VI) framework by unifying monotone operator inclusions, convex inequality constraints, and often nonsmooth objective terms. Such problems are fundamental in optimization, equilibrium modeling, saddle-point systems, and constrained convex analysis, supporting a spectrum of algorithmic and modeling innovations spanning smooth, nonsmooth, distributed, and large-scale contexts.
1. Formal Definition and Problem Structure
Let $\mathcal{H}$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$, and let $C \subseteq \mathcal{H}$ be closed, convex, and nonempty. Let $F$ be a single- or set-valued operator on $\mathcal{H}$, typically monotone and continuous (or maximal monotone), and let $g : \mathcal{H} \to (-\infty,+\infty]$ be a proper, convex, lower semicontinuous function. The general convex mixed VI is: find $x^* \in C$ such that
$$\langle F(x^*),\, x - x^* \rangle + g(x) - g(x^*) \;\ge\; 0 \quad \text{for all } x \in C.$$
This encompasses standard VIs (when $g \equiv 0$), monotone inclusions, composite inclusion problems, and equilibrium constraints. Mixed VIs naturally cover instances where the feasible set is itself defined by a convex program or a generalized Nash equilibrium. Solution concepts rely on monotonicity of $F$ and convexity of $g$; dual gap functions such as
$$G(x) = \sup_{y \in C}\big\{\langle F(y),\, x - y\rangle + g(x) - g(y)\big\}$$
quantify how far a candidate point is from solving the problem and provide termination criteria (Nwakpa et al., 23 Nov 2025, Nwakpa et al., 23 Nov 2025, Cruz et al., 2013).
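A minimal sketch of such a termination test on an assumed toy instance (affine monotone $F$, $g = \lambda\|\cdot\|_1$, unconstrained $C$), using the prox-based natural residual rather than the dual gap function itself:

```python
# Minimal sketch (assumed toy instance, not from the cited papers): for the mixed VI
#   find x* in C with <F(x*), x - x*> + g(x) - g(x*) >= 0 for all x in C,
# with C = R^n, F(x) = A x + b (A positive semidefinite, hence monotone) and
# g = lam * ||.||_1, a point solves the problem iff the natural residual
#   r(x) = x - prox_{t g}(x - t F(x))
# vanishes, so ||r(x_k)|| is a usable termination criterion.
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def natural_residual(x, F, prox_g, t=1.0):
    """r(x) = x - prox_{t g}(x - t F(x)); zero exactly at mixed-VI solutions when C = R^n."""
    return x - prox_g(x - t * F(x), t)

rng = np.random.default_rng(0)
A0 = rng.standard_normal((5, 5))
A = A0 @ A0.T                      # positive semidefinite, so F is monotone
b = rng.standard_normal(5)
lam = 0.1

F = lambda x: A @ x + b
prox_g = lambda z, t: soft_threshold(z, t * lam)

x = np.zeros(5)
print(np.linalg.norm(natural_residual(x, F, prox_g)))   # residual at the starting point
```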
2. Representative Models and Applications
Convex mixed VIs arise in the following contexts:
- Optimization over equilibrium and complementarity sets: Minimize a convex objective $f$ over the solution set of the monotone VI $\langle F(x^*), x - x^*\rangle \ge 0$ for all $x \in C$, with $F$ decomposed as a sum of monotone agent-specific maps (a minimal game-theoretic instance is sketched after this list). Applications include selecting best Nash equilibria and generalized transportation networks (Kaushik et al., 2021, Kaushik et al., 2020).
- Composite inclusion models: Problems involving sums of maximal monotone operators, some of which may be nonsmooth or set-valued, with convex constraints (e.g., in signal processing, convex feasibility, and nonsmooth monotone inclusions) (Cruz et al., 2013).
- Conically constrained and saddle-point settings: Mixed VIs with conic constraints and convex composite objectives, as in conic generalized Nash games or augmented Lagrangian saddle-point approaches (Zhao et al., 2023, Juditsky et al., 2021).
- Large-scale and structured domains: Problems on LMO-representable sets, such as matrix completion or robust learning, where expensive projections are replaced by linear minimization oracles (Juditsky et al., 2013).
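As a concrete illustration of the agent-wise decomposition in the first item above, the following hypothetical two-agent bilinear game (dimensions and data are assumptions, not taken from the cited papers) stacks the agents' partial gradients into a monotone VI map:

```python
# Hypothetical two-agent bilinear game: agent 1 minimizes x1^T B x2 in x1 and
# agent 2 minimizes -x1^T B x2 in x2, each over its own convex strategy set.
# Stacking the agents' partial gradients gives F(x) = (B x2, -B^T x1), a monotone
# (skew-symmetric) map, so Nash equilibria are solutions of VI(C, F) with C the
# product of the strategy sets.
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 3, 4
B = rng.standard_normal((n1, n2))

def F(x):
    """Stacked pseudo-gradient map of the two-agent game."""
    x1, x2 = x[:n1], x[n1:]
    return np.concatenate([B @ x2, -B.T @ x1])

# Monotonicity check: <F(x) - F(y), x - y> = 0 for a skew-symmetric map.
x, y = rng.standard_normal(n1 + n2), rng.standard_normal(n1 + n2)
print(np.dot(F(x) - F(y), x - y))    # ~0 up to floating-point error
```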
3. Solution Algorithms: Classical and Contemporary Methods
Research on convex mixed VIs has led to a range of algorithmic frameworks:
a. Proximal and Inertial-Type Methods
Proximal-point and contraction schemes, often incorporating inertial (momentum) and correction terms, target monotone operators with composite nonsmooth structure. Recent variants utilize inertial extrapolation, dual correction, and relaxation to accelerate weak convergence (Nwakpa et al., 23 Nov 2025). The general inertial proximal-point method parameterizes iterates by an extrapolation step
$$y^k = x^k + \alpha_k (x^k - x^{k-1}),$$
followed by a proximal inclusion
$$0 \in c_k\big(F(x^{k+1}) + \partial g(x^{k+1})\big) + x^{k+1} - y^k.$$
This achieves non-ergodic convergence of the proximal residual under suitable monotonicity and parameter control (Chen et al., 2014).
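A minimal sketch of this template on an assumed toy instance (affine strongly monotone $F$, $g \equiv 0$, fixed inertia and proximal parameters), where the resolvent step reduces to a linear solve:

```python
# Minimal sketch of the inertial proximal-point template on an assumed toy instance:
#   y^k = x^k + alpha_k (x^k - x^{k-1}),
#   0 in c_k (F + dg)(x^{k+1}) + x^{k+1} - y^k.
# Here g == 0 and F(x) = A x + b with A positive definite, so the resolvent step
# reduces to the linear system (I + c A) x^{k+1} = y^k - c b.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
A = M @ M.T + np.eye(20)            # strongly monotone, so the toy problem is well posed
b = rng.standard_normal(20)
x_star = np.linalg.solve(A, -b)     # with g == 0 the mixed VI reduces to F(x*) = 0

alpha, c = 0.3, 1.0                 # fixed inertia and proximal parameter (illustrative)
x_prev = x = np.zeros(20)
for _ in range(200):
    y = x + alpha * (x - x_prev)                                   # inertial extrapolation
    x_prev, x = x, np.linalg.solve(np.eye(20) + c * A, y - c * b)  # resolvent of c*F

print(np.linalg.norm(x - x_star))   # distance to the solution after 200 iterations
```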
b. Splitting and Relaxed-Projection Algorithms
Splitting strategies that exploit operator decomposability and handle constraints via separating hyperplanes avoid expensive multi-operator projections. The relaxed-projection splitting method alternates inner cycles of half-space projections (to enforce constraints described by nonsmooth convex functions) with blockwise monotone steps, converging weakly using only subgradient evaluations and no resolvent subproblems (Cruz et al., 2013).
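The key building block is the subgradient-based half-space projection; a minimal sketch under an assumed simple constraint (the unit $\ell_1$-ball), not the full method of (Cruz et al., 2013):

```python
# Minimal sketch of the half-space relaxation used in relaxed-projection schemes
# (assumed simple constraint): instead of projecting onto C = {x : c(x) <= 0} for
# nonsmooth convex c, project onto the separating half-space built from one
# subgradient of c at the current point.
import numpy as np

def halfspace_projection(x, c_val, subgrad):
    """Project x onto {z : c_val + <subgrad, z - x> <= 0}, a superset of C."""
    if c_val <= 0:
        return x                      # already feasible for the relaxed constraint
    return x - (c_val / np.dot(subgrad, subgrad)) * subgrad

# Example: C = unit l1-ball, i.e. c(x) = ||x||_1 - 1, with subgradient sign(x).
x = np.array([1.5, -0.5, 0.25])
c_val = np.abs(x).sum() - 1.0
s = np.sign(x)
x_relaxed = halfspace_projection(x, c_val, s)
print(x_relaxed, np.abs(x_relaxed).sum())   # moves toward, not necessarily onto, C
```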
c. Forward-Backward(-Forward) Dynamical Systems
Continuous-time and discretized forward-backward-forward schemes extend classical proximal dynamics to the mixed VI context. For $\eta$-strongly pseudomonotone $F$ and convex $g$, the system
$$y(t) = \operatorname{prox}_{\lambda g}\big(x(t) - \lambda F(x(t))\big), \qquad \dot{x}(t) = y(t) - x(t) + \lambda\big(F(x(t)) - F(y(t))\big)$$
enjoys global exponential stability toward the solution when the step parameter $\lambda$ is properly chosen, generalizing Lyapunov-based convergence beyond weak monotonicity (Nwakpa et al., 23 Nov 2025).
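A minimal sketch discretizing these dynamics by explicit Euler on an assumed toy problem (affine strongly monotone $F$, $g = \mu\|\cdot\|_1$); the step and Euler parameters are illustrative choices, not those of the cited analysis:

```python
# Minimal sketch (assumed toy data): explicit Euler discretization of the
# forward-backward-forward dynamics
#   y = prox_{lam g}(x - lam F(x)),   dx/dt = y - x + lam (F(x) - F(y)),
# with F(x) = A x + b strongly monotone (hence strongly pseudomonotone)
# and g = mu * ||.||_1.
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

rng = np.random.default_rng(2)
M = rng.standard_normal((10, 10))
A = M @ M.T / 10 + np.eye(10)
b = rng.standard_normal(10)
F = lambda x: A @ x + b

mu = 0.1                             # weight of g (illustrative)
lam = 0.5 / np.linalg.norm(A, 2)     # Tseng-type step with lam * Lip(F) < 1
h = 0.05                             # explicit Euler step for the dynamics

x = np.zeros(10)
for _ in range(4000):
    y = soft_threshold(x - lam * F(x), lam * mu)     # backward (prox) half-step
    x = x + h * (y - x + lam * (F(x) - F(y)))        # Euler step on the FBF vector field

# Fixed points of the prox map are exactly the mixed-VI solutions; report the residual.
print(np.linalg.norm(x - soft_threshold(x - lam * F(x), lam * mu)))
```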
d. Iterative Regularization and Block Algorithms
Single-loop incremental and block-coordinate regularized gradient schemes, such as pair-IG (for agent-structured problems) and aRB-IRG (for Cartesian product sets), avoid inner-loop VI solves by blending step-size and regularization schedules within one iteration. For convex objectives and monotone VIs, both suboptimality and infeasibility decay at explicit sublinear rates for pair-IG and aRB-IRG when the two schedules are properly balanced across iterations (Kaushik et al., 2021, Kaushik et al., 2020).
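A minimal sketch of the single-loop idea, simplified from the cited schemes (the toy skew-symmetric VI map, box constraint, and schedule exponents below are assumptions):

```python
# Minimal sketch of a single-loop iteratively regularized projected-gradient step:
# minimize f over the solution set of VI(C, F) by stepping on F + eta_k * grad f
# with coordinated diminishing stepsize gamma_k and regularization eta_k.
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((8, 8))
A = B - B.T                           # skew-symmetric: monotone, not strongly monotone
b = rng.standard_normal(8)
F = lambda x: A @ x + b               # monotone VI map
grad_f = lambda x: x                  # f(x) = 0.5 * ||x||^2, targets a least-norm VI solution

proj_C = lambda x: np.clip(x, -5.0, 5.0)   # projection onto the box C = [-5, 5]^8

x = np.zeros(8)
for k in range(1, 20001):
    gamma = 1.0 / k ** 0.5            # stepsize schedule (illustrative exponent)
    eta = 1.0 / k ** 0.25             # regularization schedule, vanishing more slowly
    x = proj_C(x - gamma * (F(x) + eta * grad_f(x)))

print(x)
```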
| Algorithm | Key Ingredients | Complexity Rate/Type |
|---|---|---|
| Inertial Proximal | Extrapolation, weighted proximal step | Non-ergodic residual convergence |
| Splitting | Hyperplane projections, operator decomposition | Weak convergence |
| Forward-Backward | Prox/eval per step, Lyapunov analysis | Exponential (if strong pseudomonotone) |
| Single-loop Reg | On-the-fly regularization/stepsize | Non-asymptotic sublinear rates |
4. Convergence Analysis and Rates
Convergence properties are tied to monotonicity, convexity, and regularity of the ingredients:
- Under monotonicity and standard coercivity, weak convergence to a solution is typical (e.g., Opial's lemma for Hilbert spaces) (Cruz et al., 2013, Nwakpa et al., 23 Nov 2025, Nwakpa et al., 23 Nov 2025).
- When $F$ is $\eta$-strongly pseudomonotone, forward-backward-forward dynamics provide exponential rates (Nwakpa et al., 23 Nov 2025).
- Inertial methods with sufficiently small extrapolation parameters ensure non-ergodic decay of the proximal residuals (Chen et al., 2014).
- Single-timescale incremental/regularized-gradient methods provide explicit non-asymptotic suboptimality and infeasibility rates, along with iteration-complexity bounds for reaching a prescribed accuracy (Kaushik et al., 2021, Kaushik et al., 2020).
- Augmented Lagrangian ALAVI achieves global convergence with an ergodic convergence rate under monotonicity, and a local linear rate under metric subregularity (Zhao et al., 2023).
5. Modeling, Representability, and Reduction to Standard Conic Programs
Juditsky and Nemirovski established that convex mixed VIs with monotone operators and elaborate constraint sets (including equalities and conic, e.g., semidefinite, inequalities) can be algorithmically reduced, via conic representation, to standard conic optimization problems (Juditsky et al., 2021, Juditsky et al., 2013). Feasible sets and monotone operators are encoded as block conic systems, and an $\epsilon$-approximate solution of the original mixed VI is obtained by solving a conic feasibility or dual-gap minimization problem. This reduction supports the use of off-the-shelf conic solvers (e.g., MOSEK, SDPT3) across a wide spectrum of modeling regimes, provided the conic data are available.
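For intuition, a minimal example in the same spirit, though not the conic-representation machinery itself: an affine monotone VI over the nonnegative orthant (a monotone LCP) reduces to a convex quadratic program, modeled here with cvxpy (a tool choice assumed for illustration, with toy random data):

```python
# Minimal example in the spirit of conic reduction (assumed toy data; not the
# Juditsky-Nemirovski construction): the affine monotone VI with F(x) = M x + q
# over the nonnegative orthant is a monotone LCP, and with M positive semidefinite
# any minimizer of the convex QP below attaining objective value 0 solves it.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(4)
G = rng.standard_normal((6, 6))
M = G @ G.T + np.eye(6)              # positive definite, hence monotone
q = rng.standard_normal(6)

x = cp.Variable(6)
objective = cp.Minimize(cp.quad_form(x, M) + q @ x)    # x^T M x + q^T x = x^T (M x + q)
constraints = [x >= 0, M @ x + q >= 0]
prob = cp.Problem(objective, constraints)
prob.solve()

print(prob.value)                    # approximately 0 since this LCP is solvable
print(x.value)                       # an (approximate) solution of the affine VI / LCP
```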
6. Numerical Results and Practical Considerations
Practical effectiveness of contemporary convex mixed VI methods is illustrated in diverse applications:
- Pair-IG and aRB-IRG demonstrate rapid decay of both infeasibility and objective suboptimality for distributed Nash equilibria, stochastic transportation networks, and SVM training, providing wall-clock advantages over classical extragradient and incremental methods as data size grows (Kaushik et al., 2021, Kaushik et al., 2020).
- Relaxed-inertial-contraction proximal methods require fewer iterations and reduced CPU time than classical extragradient and projection-based competitors across electrical-circuit, pseudomonotone, and positive-definite test instances (Nwakpa et al., 23 Nov 2025).
- Forward-backward-forward dynamical systems achieve exponential convergence for strongly pseudomonotone VIs in regularized logistic regression and low-dimensional geometric examples, with trajectories rapidly converging to equilibrium (Nwakpa et al., 23 Nov 2025).
- Augmented Lagrangian ALAVI scales to nonlinear and non-monotone VIs, with competitive complexity in both global and locally strongly-regular regimes (Zhao et al., 2023).
7. Outlook and Extensions
Recent developments enable efficient resolution of convex mixed VIs in large-scale, distributed, nonsmooth, and nonmonotone settings, with strong theoretical guarantees and scalable implementations. The conic-representability paradigm unifies algorithmic modeling, while advanced inertial, single-timescale, and superiorization strategies broaden the class of tractable problems. Nonetheless, open challenges remain for exact convergence in nonmonotone or nonconvex settings, parameter automation (e.g., for penalty, regularization, or inertia), and further computational improvements in high dimensions—especially for structures not easily amenable to conic reduction or proximal mapping.
Key references: (Nwakpa et al., 23 Nov 2025, Kaushik et al., 2021, Chen et al., 2014, Cruz et al., 2013, Nwakpa et al., 23 Nov 2025, Zhao et al., 2023, Kaushik et al., 2020, Juditsky et al., 2021, Juditsky et al., 2013, Nurminski, 2016).