Competitive Gradient Optimization (2205.14232v1)

Published 27 May 2022 in math.OC and cs.LG

Abstract: We study the problem of convergence to a stationary point in zero-sum games. We propose competitive gradient optimization (CGO), a gradient-based method that incorporates the interactions between the two players in zero-sum games into its optimization updates. We provide a continuous-time analysis of CGO and its convergence properties, showing that in the continuous limit, CGO predecessors degenerate to their gradient descent ascent (GDA) variants. We provide a rate of convergence to stationary points and further propose a generalized class of $\alpha$-coherent functions for which we provide convergence analysis. We show that for strictly $\alpha$-coherent functions, our algorithm converges to a saddle point. Moreover, we propose optimistic CGO (OCGO), an optimistic variant, for which we show a convergence rate to saddle points for the class of $\alpha$-coherent functions.

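The sketch below is only an illustration of the general idea described in the abstract, not the paper's exact CGO update: it runs a competitive-gradient-style step on the bilinear zero-sum game f(x, y) = xᵀAy (x minimizes, y maximizes), where the interaction terms come from the mixed second derivatives D_xy f = A and D_yx f = Aᵀ, and the weight `alpha` is a stand-in for the interaction parameter analyzed in the paper.

```python
import numpy as np

# Minimal sketch, assuming a first-order competitive-gradient-style update;
# the exact CGO rule and the role of alpha are defined in the paper.
# On the bilinear game f(x, y) = x^T A y, plain simultaneous GDA (alpha = 0)
# cycles or diverges, which is the failure mode interaction-aware updates address.

def step(x, y, A, eta=0.05, alpha=0.5):
    grad_x = A @ y            # ∇_x f
    grad_y = A.T @ x          # ∇_y f
    # Each player's update anticipates the opponent's move through the
    # mixed-Hessian interaction term; alpha = 0 recovers simultaneous GDA.
    x_new = x - eta * (grad_x + alpha * A @ grad_y)
    y_new = y + eta * (grad_y - alpha * A.T @ grad_x)
    return x_new, y_new

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
x, y = rng.standard_normal(3), rng.standard_normal(3)
for _ in range(2000):
    x, y = step(x, y, A)
print(np.linalg.norm(x), np.linalg.norm(y))  # norms shrink toward the saddle at the origin
```
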
Authors (2)
  1. Abhijeet Vyas (3 papers)
  2. Kamyar Azizzadenesheli (92 papers)
Citations (1)
