Beyond Minimax Optimality: A Subgame Perfect Gradient Method (2412.06731v2)
Abstract: The study of unconstrained convex optimization has historically been concerned with worst-case a priori convergence rates. The development of the Optimized Gradient Method (OGM), due to Drori and Teboulle, Kim and Fessler, marked a major milestone in this study, as OGM achieves the optimal worst-case convergence rate among all gradient-span first-order methods. However, this notion of worst-case optimality is relatively coarse and allows OGM to have worst-case performance even on instances where stronger convergence guarantees are possible. For example, OGM is known to converge at its worst-case rate even on the toy example $Lx^2/2$, where exact convergence in just two steps is possible. We introduce a notion of optimality which is stronger than minimax optimality and which requires a method to give optimal dynamic guarantees that exploit any "non-adversarialness" in the first-order oracle's reported information. We then give an algorithm which achieves this stronger optimality notion: the Subgame Perfect Gradient Method (SPGM). SPGM is a refinement of OGM whose update rules and convergence guarantees are dynamically computed in response to first-order information seen during the algorithm's execution. From a game-theoretic viewpoint, OGM can be seen as one side of a Nash Equilibrium for the "minimization game" whereas SPGM can be seen as one side of a Subgame Perfect Equilibrium for the same game. We also show that SPGM can be implemented with minimal computational and storage overhead in each iteration and provide a Julia implementation.
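The toy instance mentioned in the abstract can be made concrete with a minimal sketch (not the paper's SPGM; the constant `L` and starting point are assumed values chosen for illustration). On the quadratic $f(x) = Lx^2/2$, a single gradient step with stepsize $1/L$ already lands exactly on the minimizer $x^* = 0$, which is the kind of "non-adversarial" behavior a fixed worst-case schedule cannot exploit:

```python
L = 2.0                        # smoothness constant (assumed value)
f = lambda x: 0.5 * L * x**2   # toy quadratic objective f(x) = L*x^2/2
grad = lambda x: L * x         # its gradient

x0 = 5.0                            # arbitrary starting point (assumed)
x1 = x0 - (1.0 / L) * grad(x0)      # one gradient step with stepsize 1/L

print(x1, grad(x1))  # the iterate and its gradient are both exactly 0.0
```

Here a method that adapts to the reported gradients can certify convergence immediately, whereas a method tuned only to the worst case proceeds at its a priori rate.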