Lazy Online Gradient Descent is Universal on Polytopes (2004.01739v2)

Published 3 Apr 2020 in cs.LG and stat.ML

Abstract: We prove that the familiar Lazy Online Gradient Descent algorithm is universal on polytope domains. That is, it achieves $O(1)$ pseudo-regret against i.i.d. opponents while simultaneously meeting the well-known $O(\sqrt{N})$ worst-case regret bound. For comparison, the bulk of the literature focuses on variants of the Hedge (exponential weights) algorithm on the simplex. These can in principle be lifted to general polytopes; however, the process is computationally infeasible for many important classes where the number of vertices grows quickly with the dimension. The lifting procedure also ignores any Euclidean bounds on the cost vectors and can introduce extra factors of dimension in the pseudo-regret bound. Gradient Descent is simpler than the handful of purpose-built algorithms for polytopes in the literature, and it works in a broader setting. In particular, existing algorithms assume the optimiser is unique, while our bound allows for several optimal vertices.
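
For intuition, here is a minimal sketch (not the paper's code) of the lazy, dual-averaging form of Online Gradient Descent on the probability simplex, one concrete polytope domain. The projection routine, the $\eta \sim 1/\sqrt{N}$ step size, and the i.i.d. Gaussian cost model are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex,
    one concrete example of a polytope domain."""
    u = np.sort(v)[::-1]                        # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def lazy_ogd(costs, eta):
    """Lazy Online Gradient Descent: each round plays the projection
    of the negatively scaled *cumulative* cost vector, rather than
    stepping from the previous iterate as greedy OGD does."""
    cumulative = np.zeros(costs.shape[1])
    losses = []
    for g in costs:
        x = project_simplex(-eta * cumulative)  # lazy / dual-averaging step
        losses.append(float(g @ x))
        cumulative += g                         # linear cost, so the gradient is g
    return np.array(losses)

# Hypothetical usage: i.i.d. cost vectors over N rounds, with the
# standard eta ~ 1/sqrt(N) tuning behind the worst-case regret bound.
rng = np.random.default_rng(0)
N, d = 10_000, 5
costs = rng.normal(loc=0.1, scale=1.0, size=(N, d))
losses = lazy_ogd(costs, eta=1.0 / np.sqrt(N))
best_fixed = costs.sum(axis=0).min()            # best single vertex in hindsight
print(f"regret: {losses.sum() - best_fixed:.2f}")
```

The "lazy" update is the key design choice: the played point depends only on the running sum of cost vectors, which is what makes the algorithm adapt to i.i.d. opponents while keeping the worst-case guarantee.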
