Sharper bounds for online learning of smooth functions of a single variable (2105.14648v1)

Published 30 May 2021 in cs.LG, cs.DM, and stat.ML

Abstract: We investigate the generalization of the mistake-bound model to continuous real-valued single variable functions. Let $\mathcal{F}_q$ be the class of absolutely continuous functions $f: [0, 1] \rightarrow \mathbb{R}$ with $\|f'\|_q \le 1$, and define $opt_p(\mathcal{F}_q)$ as the best possible bound on the worst-case sum of the $p^{th}$ powers of the absolute prediction errors over any number of trials. Kimber and Long (Theoretical Computer Science, 1995) proved for $q \ge 2$ that $opt_p(\mathcal{F}_q) = 1$ when $p \ge 2$ and $opt_p(\mathcal{F}_q) = \infty$ when $p = 1$. For $1 < p < 2$ with $p = 1+\epsilon$, the only known bound was $opt_p(\mathcal{F}_q) = O(\epsilon^{-1})$ from the same paper. We show for all $\epsilon \in (0, 1)$ and $q \ge 2$ that $opt_{1+\epsilon}(\mathcal{F}_q) = \Theta(\epsilon^{-\frac{1}{2}})$, where the constants in the bound do not depend on $q$. We also show that $opt_{1+\epsilon}(\mathcal{F}_{\infty}) = \Theta(\epsilon^{-\frac{1}{2}})$.
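For readers unfamiliar with the setting, the following display reconstructs the online protocol behind $opt_p(\mathcal{F}_q)$ as we infer it from the abstract and the mistake-bound literature; the prediction notation $\hat{y}_t$ is ours, not the paper's. At each trial $t$, the adversary reveals a point $x_t \in [0, 1]$, the learner predicts $\hat{y}_t \in \mathbb{R}$, and then the true value $f(x_t)$ is revealed, so that

$$opt_p(\mathcal{F}_q) = \inf_{\text{learners}} \; \sup_{f \in \mathcal{F}_q} \; \sup_{(x_t)} \; \sum_{t} \left| \hat{y}_t - f(x_t) \right|^p,$$

the best achievable worst-case total $p^{th}$-power error over any number of trials. In this notation, the Kimber-Long results say that for $q \ge 2$ the total error can be kept at most $1$ whenever $p \ge 2$ but can be forced arbitrarily large when $p = 1$; the new result pins down the intermediate regime $p = 1 + \epsilon$ at $\Theta(\epsilon^{-1/2})$.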
