
An Incremental Gradient Method for Optimization Problems with Variational Inequality Constraints (2105.14205v2)

Published 29 May 2021 in math.OC

Abstract: We consider minimizing a sum of agent-specific, nondifferentiable, merely convex functions over the solution set of a variational inequality (VI) problem in which each agent is associated with a local monotone mapping. This problem finds application in computing the best equilibrium of nonlinear complementarity problems arising in transportation networks. We develop an iteratively regularized incremental gradient method in which, at each iteration, agents communicate over a cycle graph to update their iterates using local information about the objective and the mapping. The proposed method is single-timescale in the sense that it requires no computationally expensive projection step at any iteration. We derive non-asymptotic, agent-wise convergence rates for the suboptimality of the global objective and the infeasibility of the VI constraints, the latter measured by a suitably defined dual gap function. The proposed method appears to be the first fully iterative scheme with iteration-complexity guarantees for distributed optimization problems with VI constraints over cycle graphs. Preliminary numerical experiments on a transportation network problem and a support vector machine model are presented.
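To make the abstract's template concrete, below is a minimal sketch of a generic iteratively regularized incremental gradient update on a toy one-dimensional problem. This is an illustration of the general idea only, not the paper's exact scheme: the two-agent problem, the box constraint set, and the step-size/regularization schedules (`gamma`, `eta`) are all assumptions chosen for the example, with `eta` decaying to zero more slowly than `gamma` so the VI constraint dominates in the limit.

```python
# Hedged sketch of an iteratively regularized incremental gradient method,
# following the generic template described in the abstract (NOT the paper's
# exact algorithm). Toy setup: two agents on the easy-to-project box [-5, 5].
#   VI constraint: sum_i F_i(x) = x - 1  (monotone; solution set {1})
#   Objective:     sum_i f_i(x) = x**2   (selects the "best" VI solution)
# Agent i holds F_i(x) = 0.5*(x - 1) and f_i(x) = 0.5*x**2.

def project(x, lo=-5.0, hi=5.0):
    """Euclidean projection onto the box [lo, hi] (a cheap projection)."""
    return min(max(x, lo), hi)

def F_local(x):
    """Each agent's local monotone mapping (assumed identical here)."""
    return 0.5 * (x - 1.0)

def grad_f_local(x):
    """Gradient of each agent's convex objective term f_i(x) = 0.5*x**2."""
    return x

def solve(num_epochs=20000, x0=3.0, num_agents=2):
    x = x0
    for k in range(1, num_epochs + 1):
        gamma = k ** -0.6  # step size (assumed schedule)
        eta = k ** -0.4    # regularization weight; decays slower than gamma
        # Incremental pass: agents update in turn, as over a cycle graph,
        # each using only its own F_i and grad f_i.
        for _ in range(num_agents):
            x = project(x - gamma * (F_local(x) + eta * grad_f_local(x)))
    return x

print(solve())  # drifts toward the VI solution x = 1 as eta_k -> 0
```

The single-timescale character shows up in the update: each agent takes one combined step along its mapping plus a vanishing multiple of its objective gradient, rather than solving an inner regularized subproblem to high accuracy before adjusting the regularization.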
