
Approximate Nash Equilibrium via Inexact ADMM

Updated 19 November 2025
  • The paper presents a distributed inexact-ADMM method to compute ε-approximate Nash equilibria in convex and strongly monotone games.
  • It leverages consensus constraints and proximal updates, ensuring convergence with an O(1/k) residual decay under limited information exchange.
  • The algorithm’s tuning of penalty parameters balances convergence speed and stability, with empirical validation in networked scenarios like wireless congestion control.

An approximate Nash equilibrium seeking algorithm is a computational procedure designed to identify an action profile or strategy set for multiple agents in a noncooperative game, such that no agent can achieve more than a specified ε improvement in their cost or utility by deviating unilaterally. Rigorous development of such algorithms is central to multi-agent learning, distributed optimization, and equilibrium computation for games characterized by convexity, continuity, monotonicity, and possibly large-scale communication graphs. Among the foundational approaches, distributed inexact-ADMM (Alternating Direction Method of Multipliers) provides a principled framework for convergence-guaranteed iterative computation of approximate Nash equilibria under restricted information and network constraints (Salehisadaghiani et al., 2016).

1. Precise Problem Formulation

Consider a game $\mathcal{N}=\{1,\ldots,N\}$ of $N$ players, each selecting $x_i\in X_i\subset\mathbb{R}$ (convex, compact), with joint action vector $x=(x_i,x_{-i})\in X:=\prod_j X_j$ and cost function $J_i(x_i,x_{-i})$ for player $i$. The Nash equilibrium $x^*$ is characterized by

$$J_i(x_i^*,x_{-i}^*) \leq J_i(x_i,x_{-i}^*) \quad \forall x_i\in X_i,\ \forall i,$$

which is equivalently reformulated as a variational inequality (VI) involving the pseudo-gradient mapping

$$F(x):=(\nabla_1 J_1(x),\ldots,\nabla_N J_N(x))^T.$$

A solution $x^*$ to

$$(F(x^*))^T(x-x^*) \geq 0 \quad \forall x\in X$$

yields the Nash equilibrium. To enable distributed computation, each agent maintains a local copy $x^i$ of the joint action vector, and consensus constraints are imposed via a communication graph $G=(\mathcal{N},\mathcal{E})$.

An $\epsilon$-approximate Nash equilibrium is a profile $x^\epsilon$ such that no player can reduce its cost by more than $\epsilon$ through a unilateral deviation:

$$\max_i\, \Big[J_i(x^\epsilon_i, x^\epsilon_{-i}) - \min_{x_i\in X_i} J_i(x_i, x^\epsilon_{-i})\Big] \leq \epsilon.$$
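This gap can be checked numerically at a candidate profile by solving each player's best-response problem. The sketch below assumes each $J_i$ is available as a Python callable on the joint action and each $X_i$ is an interval; the helper name, the SciPy-based inner solver, and the quadratic toy game are illustrative assumptions, not objects from the paper.

```python
# Minimal sketch: estimate the epsilon-NE gap of a candidate profile x by solving
# each player's best-response problem over its interval X_i.
import numpy as np
from scipy.optimize import minimize_scalar

def epsilon_ne_gap(x, costs, bounds):
    """Return max_i [ J_i(x_i, x_{-i}) - min_{y in X_i} J_i(y, x_{-i}) ]."""
    gaps = []
    for i, (J_i, (lo, hi)) in enumerate(zip(costs, bounds)):
        def deviation_cost(y, i=i, J_i=J_i):
            prof = np.array(x, dtype=float)
            prof[i] = y                                   # unilateral deviation by player i
            return J_i(prof)
        best = minimize_scalar(deviation_cost, bounds=(lo, hi), method="bounded")
        gaps.append(deviation_cost(x[i]) - best.fun)
    return max(gaps)

# Illustrative 2-player quadratic game with coupled costs (assumed, for demonstration).
costs = [lambda p: (p[0] - 1.0) ** 2 + 0.5 * p[0] * p[1],
         lambda p: (p[1] + 0.5) ** 2 + 0.5 * p[0] * p[1]]
bounds = [(-2.0, 2.0), (-2.0, 2.0)]
print(epsilon_ne_gap([1.2, -0.8], costs, bounds))         # near zero: close to the exact NE
```

A profile is accepted as an $\epsilon$-approximate equilibrium whenever the returned gap is at most $\epsilon$.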

2. Algorithmic Framework: Inexact-ADMM Approach

The distributed NE seeking problem under consensus constraints is framed as

$$\min_{\{x^i\}} \sum_{i=1}^N \big[J_i(x_i^i, x_{-i}^i) + \mathbb{I}_{X_i}(x_i^i)\big] \quad\text{s.t.}\quad x^i-x^j=0,\ \forall(i,j)\in\mathcal{E},$$

with $\mathbb{I}_{X_i}$ the indicator function for $X_i$.

The edge-based augmented Lagrangian is

$$\mathcal{L}_\rho(\{x^i\}, \{\lambda^{ij}\}) = \sum_{i=1}^N \big[J_i(x_i^i, x_{-i}^i) + \mathbb{I}_{X_i}(x_i^i)\big] + \sum_{(i,j)\in\mathcal{E}} \Big[(\lambda^{ij})^T(x^i-x^j) + \tfrac{\rho}{2}\|x^i-x^j\|^2\Big],$$

where $\lambda^{ij}$ are dual variables and $\rho>0$ is the penalty parameter.
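For a concrete reading of this expression, the short sketch below evaluates $\mathcal{L}_\rho$ for given local copies and edge duals, with the indicator terms assumed satisfied (feasible local copies); all names are placeholders rather than notation from the paper.

```python
# Sketch: evaluate the edge-based augmented Lagrangian L_rho for NumPy inputs.
# X_local[i] is agent i's local copy x^i of the joint action (a length-N array);
# lam[(i, j)] is the edge dual lambda^{ij}; indicator terms are assumed satisfied.
import numpy as np

def augmented_lagrangian(X_local, lam, edges, costs, rho):
    value = sum(costs[i](X_local[i]) for i in range(len(costs)))   # sum_i J_i(x^i_i, x^i_{-i})
    for (i, j) in edges:
        diff = X_local[i] - X_local[j]                             # consensus gap on edge (i, j)
        value += lam[(i, j)] @ diff + 0.5 * rho * diff @ diff      # dual term + quadratic penalty
    return value

# Tiny usage: two agents, one edge, illustrative quadratic costs.
costs = [lambda p: (p[0] - 1.0) ** 2 + 0.5 * p[0] * p[1],
         lambda p: (p[1] + 0.5) ** 2 + 0.5 * p[0] * p[1]]
X_local = [np.array([0.5, -0.2]), np.array([0.4, -0.3])]
lam = {(0, 1): np.array([0.1, 0.0])}
print(augmented_lagrangian(X_local, lam, edges=[(0, 1)], costs=costs, rho=2.0))
```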

ADMM update steps per player $i$, per iteration $k$:

  1. Primal update (x-step):

$$x_i^{k+1} = \arg\min_{x_i\in X_i} \Big[J_i(x_i, x_{-i}^k) + (\lambda_i^k)^T(x_i-z_i^k) + \tfrac{\rho}{2}\|x_i-z_i^k\|^2 \Big]$$

  2. Consensus update (z-step):

$$z_i^{k+1} = \frac{1}{|\mathcal{N}_i|}\sum_{j\in\mathcal{N}_i} \Big( x_j^{k+1} + \frac{1}{\rho}\lambda_j^k \Big)$$

  3. Dual update (λ-step):

$$\lambda_i^{k+1} = \lambda_i^k + \rho\,(x_i^{k+1} - z_i^{k+1})$$

Communication per step requires player $i$ to receive $(x_j^{k+1}, \lambda_j^k)$ from each neighbor $j$.
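A compact, single-process reading of these three updates is sketched below, treating each agent's $x^i$, $z^i$, and $\lambda^i$ as full local copies of the joint action, consistent with the consensus formulation above. The handling of the non-own coordinates in the x-step (they take the unconstrained proximal value $z_j^i - \lambda_j^i/\rho$), the closed-neighborhood averaging, the SciPy scalar solver, and the toy game are assumptions made for illustration; this is not the paper's exact protocol.

```python
# Single-process simulation of the x-, z-, and lambda-updates for scalar decisions
# with box constraints. X, Z, Lam have shape (N, N): row i is agent i's local copy,
# consensus variable, and dual, respectively.
import numpy as np
from scipy.optimize import minimize_scalar

def admm_iteration(X, Z, Lam, costs, bounds, neighbors, rho):
    N = len(costs)
    X_new = np.empty_like(X)
    for i in range(N):
        # 1) x-step: proximal best response in the own coordinate, with the other
        #    coordinates of the cost frozen at agent i's previous estimate X[i].
        def own_obj(xi, i=i):
            prof = X[i].copy()
            prof[i] = xi
            return (costs[i](prof)
                    + Lam[i, i] * (xi - Z[i, i])
                    + 0.5 * rho * (xi - Z[i, i]) ** 2)
        X_new[i] = Z[i] - Lam[i] / rho          # non-own coordinates (assumed proximal rule)
        X_new[i, i] = minimize_scalar(own_obj, bounds=bounds[i], method="bounded").x
    # 2) z-step: average neighbors' new copies and scaled duals.
    Z_new = np.array([np.mean([X_new[j] + Lam[j] / rho for j in neighbors[i]], axis=0)
                      for i in range(N)])
    # 3) dual step: ascent on the consensus residual.
    Lam_new = Lam + rho * (X_new - Z_new)
    return X_new, Z_new, Lam_new

# Usage on an illustrative 2-player quadratic game over a 2-node graph.
costs = [lambda p: (p[0] - 1.0) ** 2 + 0.5 * p[0] * p[1],
         lambda p: (p[1] + 0.5) ** 2 + 0.5 * p[0] * p[1]]
bounds = [(-2.0, 2.0), (-2.0, 2.0)]
neighbors = {0: [0, 1], 1: [0, 1]}              # closed neighborhoods (assumption)
X, Z, Lam = np.zeros((2, 2)), np.zeros((2, 2)), np.zeros((2, 2))
for _ in range(200):
    X, Z, Lam = admm_iteration(X, Z, Lam, costs, bounds, neighbors, rho=2.0)
print(X[0])   # each row should settle near this toy game's equilibrium, roughly (1.2, -0.8)
```

In a genuinely distributed deployment, the x-step runs locally at each node and only the pairs $(x_j^{k+1}, \lambda_j^k)$ cross the network, matching the communication pattern described above.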

3. Convergence Guarantees and Analysis

Assuming:

  • $X_i$ is nonempty, compact, and convex for all $i$.
  • Each $J_i$ is $C^1$ and convex in $x_i$, and jointly continuous in $x$.
  • $F(x)$ is $L$-Lipschitz and $\sigma$-strongly monotone.
  • $G$ is connected.

Main convergence properties:

  • For penalty $\rho>\rho_{\min} := \frac{L^2}{2\sigma} - c_{\min}$ (where $c_{\min}$ is the smallest eigenvalue of the Laplacian of $G$), the iterates converge to $x^*$, with residuals

$$r^k := \max_i \|x_i^k - z_i^k\|,\quad s^k := \rho\|z^k - z^{k-1}\|,$$

satisfying $r^k+s^k = O(1/k)$; a residual-based stopping test is sketched below.
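These residuals translate directly into a stopping test. A minimal sketch, reusing the $(N,N)$ array layout of the code in Section 2 and an illustrative tolerance:

```python
# Residual-based stopping test: r^k = max_i ||x^i_k - z^i_k||, s^k = rho ||z^k - z^{k-1}||.
import numpy as np

def converged(X, Z, Z_prev, rho, tol=1e-3):
    r_k = np.max(np.linalg.norm(X - Z, axis=1))    # primal (consensus) residual
    s_k = rho * np.linalg.norm(Z - Z_prev)         # dual residual
    return r_k + s_k <= tol
```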

The proof proceeds via:

  • Proximal-ADMM firm nonexpansiveness.
  • Lyapunov function combining primal and dual errors:

$$V_k = \|x^k - x^*\|^2 + \|\lambda^k - \lambda^*\|^2,$$

with $V_{k+1} \le V_k - \alpha\big(\|x^{k+1}-x^k\|^2 + \|\lambda^{k+1}-\lambda^k\|^2\big)$ for some $\alpha>0$.

  • A telescoping argument (made explicit below) yields $x^k \to x^*$ and $\lambda^k \to \lambda^*$.
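A standard way to make the telescoping step explicit: summing the Lyapunov decrease over $k=0,\ldots,K-1$ gives

$$\sum_{k=0}^{K-1}\big(\|x^{k+1}-x^k\|^2+\|\lambda^{k+1}-\lambda^k\|^2\big) \le \frac{1}{\alpha}\sum_{k=0}^{K-1}(V_k - V_{k+1}) = \frac{V_0 - V_K}{\alpha} \le \frac{V_0}{\alpha},$$

so the successive differences are summable and the smallest of the first $K$ increments is $O(1/K)$; together with boundedness of the iterates, this drives $(x^k,\lambda^k)$ to $(x^*,\lambda^*)$.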

4. Approximation Error and Parameter Tuning

Recall that an $\epsilon$-approximate Nash equilibrium is a profile $x^\epsilon$ satisfying

$$\max_i \Big[J_i(x^\epsilon_i, x^\epsilon_{-i}) - \min_{x_i\in X_i} J_i(x_i, x^\epsilon_{-i})\Big] \le \epsilon.$$

After $k$ steps, the bound

$$\|x^k-z^k\| + \|z^k-z^{k-1}\| \le \frac{C}{\rho k}$$

implies, by the Lipschitz continuity of $J_i$, a suboptimality of $O(1/(\rho k))$. To achieve $\epsilon$-accuracy, select $k \ge C/(\rho\epsilon)$.

Penalty selection:

  • $\rho$ must satisfy $\rho \geq \rho_{\min} = \frac{L^2}{2\sigma} - c_{\min}$ (ensuring strong convexity of the penalized subproblems).
  • In practice, choose $\rho = O(L)$ to trade off convergence speed against numerical stability; a tuning sketch follows this list.
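As a rough illustration of how these two rules combine, the helper below computes $\rho_{\min}$ and an iteration budget from assumed problem constants; $L$, $\sigma$, $c_{\min}$, $C$, and the safety factor are placeholders to be estimated for a given game, not values prescribed by the paper.

```python
# Illustrative tuning helper: penalty from rho >= L^2/(2*sigma) - c_min, iteration
# budget from k >= C / (rho * eps).
def tune(L, sigma, c_min, C, eps, safety=1.1):
    rho_min = L ** 2 / (2.0 * sigma) - c_min
    rho = max(safety * rho_min, L)        # respect the lower bound, keep rho on the order of L
    k = int(C / (rho * eps)) + 1          # iterations needed for eps-accuracy
    return rho, k

print(tune(L=4.0, sigma=0.5, c_min=1.0, C=10.0, eps=1e-3))   # e.g. (16.5, 607)
```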

5. Practical Implementation and Complexity

Each iteration comprises local minimization and averaging over neighbors:

  • Communication per step: each node sends one primal vector $x_j$ and one dual vector $\lambda_j$; total messages per iteration $\approx 2|\mathcal{E}|$.

Empirical results (e.g., ad-hoc wireless network congestion control with $N=16$ nodes):

  • Convergence of $x_i^k$ to $x_i^*$ observed within 200 iterations.
  • The ADMM-based method reaches $10^{-3}$ accuracy in approximately 50 iterations, compared to 400 for a best-response gradient scheme.
  • Residual decays as $O(1/k)$, consistent with theoretical prediction.

6. Structural, Spectral, and Communication Considerations

Convergence speed and approximation quality depend on:

  • Graph connectivity and Laplacian spectrum.
  • Degree of coupling in cost functions (condition number affects $\rho_{\min}$).
  • Local computation resources (solving convex minimizations per update).

Strong monotonicity of the pseudo-gradient and convexity of the costs ensure global convergence under the specified penalty conditions, while communication-graph properties critically influence step-size and rate bounds.

7. Significance and Extensions

The distributed inexact-ADMM algorithm exemplifies a scalable, provably convergent method for approximate Nash equilibrium seeking in multi-agent convex games with limited information exchange. The O(1/k) residual decay and tunable accuracy via penalty parameters provide guarantees suitable for large-scale networks and real-time scenarios. This methodology has influenced subsequent work on consensus-based splitting, operator-theoretic distributed algorithms, and robust game-theoretic computation (Salehisadaghiani et al., 2016).

In summary, approximate Nash equilibrium seeking via inexact-ADMM leverages local linearizations, consensus averaging, and primal-dual residual control to yield distributed convergence at quantifiable rates, under standard convexity, monotonicity, and graph-connectivity assumptions.
