
Online and Bandit Algorithms Beyond $\ell_p$ Norms (2210.13755v1)

Published 25 Oct 2022 in cs.DS

Abstract: Vector norms play a fundamental role in computer science and optimization, so there is an ongoing effort to generalize existing algorithms to settings beyond $\ell_\infty$ and $\ell_p$ norms. We show that many online and bandit applications for general norms admit good algorithms as long as the norm can be approximated by a function that is "gradient-stable", a notion that we introduce. Roughly, it says that the gradient of the function should not drastically decrease (multiplicatively) in any component as we increase the input vector. We prove that several families of norms, including all monotone symmetric norms, admit a gradient-stable approximation, giving us the first online and bandit algorithms for these norm families. In particular, our notion of gradient-stability gives $O\big(\log^2 (\text{dimension})\big)$-competitive algorithms for the symmetric norm generalizations of Online Generalized Load Balancing and Bandits with Knapsacks. Our techniques extend to applications beyond symmetric norms as well, e.g., to Online Vector Scheduling and to Online Generalized Assignment with Convex Costs. Some key properties underlying our applications that are implied by gradient-stable approximations are a "smooth game inequality" and an approximate converse to Jensen's inequality.
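The gradient-stability notion can be illustrated numerically. The toy sketch below (not from the paper; the function choice and helper names are ours) computes the gradient of the $\ell_2$ norm at a vector $x$ and at a coordinatewise-larger vector $y \geq x$, then reports the smallest componentwise ratio $\nabla_i f(y) / \nabla_i f(x)$. A drastic multiplicative drop in this ratio is exactly what gradient-stability forbids:

```python
import math

def grad_l2(x):
    """Gradient of f(x) = ||x||_2 at x != 0: the i-th component is x_i / ||x||_2."""
    n = math.sqrt(sum(v * v for v in x))
    return [v / n for v in x]

def min_gradient_ratio(x, y):
    """Smallest componentwise ratio grad_i(y) / grad_i(x) for y >= x coordinatewise.

    For a gradient-stable approximation this ratio stays bounded away from 0;
    a value near 0 signals a drastic multiplicative decrease in some component.
    """
    gx, gy = grad_l2(x), grad_l2(y)
    return min(b / a for a, b in zip(gx, gy) if a > 0)

x = [1.0, 1.0]
y = [1.0, 10.0]  # increase only the second coordinate
r = min_gradient_ratio(x, y)
# The first gradient component falls from 1/sqrt(2) to 1/sqrt(101),
# roughly a 7x multiplicative drop, even though x_1 never changed.
print(f"min gradient ratio: {r:.3f}")
```

For $\ell_p$ norms with large $p$ the same experiment shows an even sharper drop, which is why approximating such norms by better-behaved gradient-stable surrogates matters in the online applications above.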
