Lazy Online Gradient Descent is Universal on Polytopes (2004.01739v2)
Abstract: We prove that the familiar Lazy Online Gradient Descent algorithm is universal on polytope domains. That is, it attains $O(1)$ pseudo-regret against i.i.d. opponents, while simultaneously achieving the well-known $O(\sqrt N)$ worst-case regret bound. For comparison, the bulk of the literature focuses on variants of the Hedge (exponential weights) algorithm on the simplex. These can in principle be lifted to general polytopes; however, the process is computationally infeasible for many important classes where the number of vertices grows quickly with the dimension. The lifting procedure also ignores any Euclidean bounds on the cost vectors, and can create extra factors of dimension in the pseudo-regret bound. Gradient Descent is simpler than the handful of purpose-built algorithms for polytopes in the literature, and works in a broader setting. In particular, existing algorithms assume the optimiser is unique, while our bound allows for several optimal vertices.
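The lazy (dual-averaging) variant of Online Gradient Descent referred to in the abstract plays the Euclidean projection of the scaled negative cumulative gradient onto the domain. Below is a minimal sketch, with the polytope taken to be the probability simplex as an illustrative special case; the step size `eta` and the cost sequence are illustrative choices, not from the paper.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    (standard sort-and-threshold method)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    # largest index where the thresholded coordinate stays positive
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def lazy_ogd(costs, eta):
    """Lazy OGD: at each round, project the running sum of
    negative scaled cost vectors, then observe the next cost."""
    d = costs.shape[1]
    g_sum = np.zeros(d)
    plays = []
    for g in costs:
        x = project_simplex(-eta * g_sum)  # lazy step: project the sum, not the last iterate
        plays.append(x)
        g_sum += g
    return np.array(plays)
```

With i.i.d. costs whose mean favours one vertex, the iterates settle on that vertex, which is the behaviour behind the $O(1)$ pseudo-regret; against an adversarial sequence, choosing `eta` on the order of $1/\sqrt N$ recovers the $O(\sqrt N)$ worst-case guarantee.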