
Beyond Minimax Optimality: A Subgame Perfect Gradient Method (2412.06731v2)

Published 9 Dec 2024 in math.OC

Abstract: The study of unconstrained convex optimization has historically been concerned with worst-case a priori convergence rates. The development of the Optimized Gradient Method (OGM), due to Drori and Teboulle, Kim and Fessler, marked a major milestone in this study, as OGM achieves the optimal worst-case convergence rate among all gradient-span first-order methods. However, this notion of worst-case optimality is relatively coarse and allows OGM to have worst-case performance even on instances where stronger convergence guarantees are possible. For example, OGM is known to converge at its worst-case rate even on the toy example $Lx^2/2$, where exact convergence in just two steps is possible. We introduce a notion of optimality which is stronger than minimax optimality that requires a method to give optimal dynamic guarantees that exploit any "non-adversarialness" in the first-order oracle's reported information. We then give an algorithm which achieves this stronger optimality notion: the Subgame Perfect Gradient Method (SPGM). SPGM is a refinement of OGM whose update rules and convergence guarantees are dynamically computed in response to first-order information seen during the algorithm's execution. From a game-theoretic viewpoint, OGM can be seen as one side of a Nash Equilibrium for the "minimization game" whereas SPGM can be seen as one side of a Subgame Perfect Equilibrium for the same game. We also show that SPGM can be implemented with minimal computational and storage overhead in each iteration and provide a Julia implementation.
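The sketch below is not the paper's SPGM or its Julia implementation; it is a minimal reproduction of the standard OGM recursion published by Kim and Fessler, run on the toy quadratic $Lx^2/2$ from the abstract. It illustrates numerically the phenomenon SPGM is designed to fix: on this easy instance, OGM's fixed momentum schedule makes the final objective value match the worst-case bound $L\|x_0 - x^*\|^2 / (2\theta_N^2)$ exactly. The names `ogm`, `f`, and `g` are illustrative choices, not taken from the paper.

```julia
# Standard OGM (Kim & Fessler): a gradient step on x, a momentum recursion
# for θ with a larger final-step value, and an extrapolated x update.
function ogm(grad, x0, L, N)
    x, y, θ = x0, x0, 1.0
    for k in 1:N
        y_new = x - grad(x) / L                  # 1/L-stepsize gradient step
        # Momentum parameter: usual recursion, with OGM's modified last step.
        θ_new = k < N ? (1 + sqrt(1 + 4θ^2)) / 2 : (1 + sqrt(1 + 8θ^2)) / 2
        # Extrapolation: Nesterov-style term plus an extra x-correction term.
        x = y_new + ((θ - 1) / θ_new) * (y_new - y) + (θ / θ_new) * (y_new - x)
        y, θ = y_new, θ_new
    end
    return x, θ
end

L = 2.0
f(x) = L * x^2 / 2                  # toy quadratic from the abstract
g(x) = L * x                        # its gradient
xN, θN = ogm(g, 1.0, L, 10)
println("f(x_N)          = ", f(xN))
println("worst-case bound = ", L / (2θN^2))  # coincides with f(x_N) here
```

A single gradient step on this quadratic already lands at the minimizer, so two steps would suffice; yet the momentum terms repeatedly push the primary iterate away from it, and the printed values agree up to floating-point error.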

