
Alternating proximal-gradient steps for (stochastic) nonconvex-concave minimax problems

Published 27 Jul 2020 in math.OC | (2007.13605v4)

Abstract: Minimax problems of the form $\min_x \max_y \Psi(x,y)$ have attracted increased interest largely due to advances in machine learning, in particular generative adversarial networks. These are typically trained using variants of stochastic gradient descent for the two players. Although convex-concave problems are well understood, with many efficient solution methods to choose from, theoretical guarantees outside of this setting are sometimes lacking even for the simplest algorithms. In particular, this is the case for alternating gradient descent ascent, where the two agents take turns updating their strategies. To partially close this gap in the literature, we prove a novel global convergence rate for the stochastic version of this method for finding a critical point of $g(\cdot) := \max_y \Psi(\cdot,y)$ in a setting which is not convex-concave.
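The alternating update scheme described in the abstract (each player takes a turn, with the ascent step seeing the descent player's freshly updated iterate) can be sketched on a toy smooth problem. Everything below is an illustrative assumption, not taken from the paper: the test function $\Psi$, the step sizes, and the deterministic (non-stochastic) updates; the proximal terms reduce to plain gradient steps here because no nonsmooth regularizer is included.

```python
import numpy as np

# Toy problem (illustrative, not from the paper):
#   Psi(x, y) = (x**3 - x) * y - 0.5 * y**2
# Psi is nonconvex in x and strongly concave in y, with
#   g(x) := max_y Psi(x, y) = 0.5 * (x**3 - x)**2,
# so critical points of g satisfy (x**3 - x) * (3*x**2 - 1) = 0.

def grad_x(x, y):
    # partial derivative of Psi with respect to x
    return (3.0 * x**2 - 1.0) * y

def grad_y(x, y):
    # partial derivative of Psi with respect to y
    return (x**3 - x) - y

def alternating_gda(x0, y0, tau=0.01, sigma=0.1, iters=5000):
    """Alternating gradient descent ascent: the ascent step on y
    uses the already-updated x (the 'taking turns' in the abstract)."""
    x, y = x0, y0
    for _ in range(iters):
        x = x - tau * grad_x(x, y)    # descent step for the min player
        y = y + sigma * grad_y(x, y)  # ascent step sees the new x
    return x, y

x, y = alternating_gda(x0=0.5, y0=0.0)
```

With these (assumed) step sizes the inner variable $y$ tracks the maximizer $x^3 - x$, so the $x$-updates approximately descend on $g$, and the iterates settle near a critical point of $g$.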

