
The Computational Power of Optimization in Online Learning (1504.02089v4)

Published 8 Apr 2015 in cs.LG and cs.GT

Abstract: We consider the fundamental problem of prediction with expert advice where the experts are "optimizable": there is a black-box optimization oracle that can be used to compute, in constant time, the leading expert in retrospect at any point in time. In this setting, we give a novel online algorithm that attains vanishing regret with respect to $N$ experts in total $\widetilde{O}(\sqrt{N})$ computation time. We also give a lower bound showing that this running time cannot be improved (up to log factors) in the oracle model, thereby exhibiting a quadratic speedup as compared to the standard, oracle-free setting where the required time for vanishing regret is $\widetilde{\Theta}(N)$. These results demonstrate an exponential gap between the power of optimization in online learning and its power in statistical learning: in the latter, an optimization oracle---i.e., an efficient empirical risk minimizer---makes it possible to learn a finite hypothesis class of size $N$ in time $O(\log{N})$. We also study the implications of our results for learning in repeated zero-sum games, in a setting where the players have access to oracles that compute, in constant time, their best response to any mixed strategy of their opponent. We show that the runtime required for approximating the minimax value of the game in this setting is $\widetilde{\Theta}(\sqrt{N})$, yielding again a quadratic improvement upon the oracle-free setting, where $\widetilde{\Theta}(N)$ is known to be tight.
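
To make the oracle model concrete, the following is a minimal Python sketch (not the paper's $\widetilde{O}(\sqrt{N})$-time algorithm) of a hindsight-optimization oracle over $N$ experts, together with a toy follow-the-perturbed-leader loop that illustrates the oracle access pattern. The names `OptimizationOracle` and `follow_the_perturbed_leader` are illustrative and not taken from the paper.

```python
import numpy as np


class OptimizationOracle:
    """Black-box oracle: given cumulative losses observed so far, return the
    leading (lowest-loss) expert in hindsight. In the oracle model this call
    is treated as constant time, regardless of N."""

    def __init__(self, n_experts):
        self.cumulative_loss = np.zeros(n_experts)

    def update(self, loss_vector):
        # Record the loss each expert incurred in the current round.
        self.cumulative_loss += loss_vector

    def best_expert(self, perturbation=None):
        # Leading expert in retrospect, optionally on perturbed cumulative
        # losses (perturbation is how oracle-based online algorithms such as
        # follow-the-perturbed-leader obtain low regret).
        scores = self.cumulative_loss
        if perturbation is not None:
            scores = scores + perturbation
        return int(np.argmin(scores))


def follow_the_perturbed_leader(loss_stream, n_experts, eta=1.0, seed=0):
    """Toy FTPL loop built on the oracle interface above. It queries the
    oracle once per round; it is only an illustration of oracle access,
    not the algorithm analyzed in the paper."""
    rng = np.random.default_rng(seed)
    oracle = OptimizationOracle(n_experts)
    total_loss = 0.0
    for loss_vector in loss_stream:
        noise = rng.exponential(scale=eta, size=n_experts)
        choice = oracle.best_expert(perturbation=-noise)
        total_loss += loss_vector[choice]
        oracle.update(loss_vector)
    return total_loss
```

The point of the sketch is the interface: the learner never enumerates the $N$ experts itself, it only queries the oracle, which is exactly the access model under which the paper proves its $\widetilde{\Theta}(\sqrt{N})$ upper and lower bounds.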

Authors (2)
  1. Elad Hazan (106 papers)
  2. Tomer Koren (79 papers)
Citations (64)
