A Parallel Stochastic Approximation Method for Nonconvex Multi-Agent Optimization Problems (1410.5076v2)

Published 19 Oct 2014 in cs.MA and math.OC

Abstract: Consider the problem of minimizing the expected value of a (possibly nonconvex) cost function parameterized by a random (vector) variable, when the expectation cannot be computed accurately (e.g., because the statistics of the random variables are unknown and/or the computational complexity is prohibitive). Classical sample stochastic gradient methods for solving this problem may empirically suffer from slow convergence. In this paper, we propose for the first time a stochastic parallel Successive Convex Approximation-based (best-response) algorithmic framework for general nonconvex stochastic sum-utility optimization problems, which arise naturally in the design of multi-agent systems. The proposed novel decomposition enables all users to update their optimization variables in parallel by solving a sequence of strongly convex subproblems, one for each user. Almost sure convergence to stationary points is proved. We then customize our algorithmic framework to solve the stochastic sum-rate maximization problem over Single-Input-Single-Output (SISO) frequency-selective interference channels, Multiple-Input-Multiple-Output (MIMO) interference channels, and MIMO multiple-access channels. Numerical results show that our algorithms are much faster than state-of-the-art stochastic gradient schemes while achieving the same (or better) sum-rates.
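The core idea in the abstract — each agent solves a strongly convex surrogate subproblem in parallel, with stochastic gradient information aggregated by a running average and iterates combined via diminishing stepsizes — can be illustrated with a minimal sketch. The toy objective, the quadratic surrogate, and all parameter choices below are illustrative assumptions, not the paper's actual algorithm or channel models:

```python
import numpy as np

# Hedged sketch of a stochastic parallel best-response scheme.
# Hypothetical toy problem (NOT from the paper):
#   F(x) = E_xi[ f(x, xi) ],  f(x, xi) = sum_i (x_i^2 - 1)^2 / 4 + xi_i * x_i,
# with E[xi] = 0, so stationary points of F satisfy x_i^3 - x_i = 0.

rng = np.random.default_rng(0)
n, T, tau = 3, 20000, 2.0          # agents, iterations, surrogate curvature
x = 2.0 * np.ones(n)               # shared iterate
g = np.zeros(n)                    # running average of sampled gradients

for t in range(T):
    xi = 0.1 * rng.standard_normal(n)      # one random sample per iteration
    grad_sample = x * (x**2 - 1.0) + xi    # nabla_x f(x, xi) at current x
    rho = 1.0 / (t + 1) ** 0.6             # gradient-averaging weight
    g = (1.0 - rho) * g + rho * grad_sample
    # Parallel best responses: agent i minimizes the strongly convex
    # surrogate  g_i * (y - x_i) + (tau / 2) * (y - x_i)^2,
    # whose minimizer has the closed form below.
    x_hat = x - g / tau
    gamma = 1.0 / (t + 1) ** 0.7           # diminishing combination stepsize
    x = x + gamma * (x_hat - x)

residual = np.max(np.abs(x**3 - x))        # stationarity measure for F
print(x, residual)
```

With diminishing `rho` and `gamma`, the averaged gradient `g` tracks the true gradient of `F` and the iterate settles near a stationary point (here, a coordinate-wise root of `x^3 - x`); the strong convexity of each surrogate (controlled by `tau`) is what makes every per-agent subproblem well posed.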

Authors (4)
  1. Yang Yang (884 papers)
  2. Gesualdo Scutari (62 papers)
  3. Daniel P. Palomar (61 papers)
  4. Marius Pesavento (45 papers)
Citations (17)
