Optimal rates for first-order stochastic convex optimization under Tsybakov noise condition (1207.3012v2)

Published 12 Jul 2012 in cs.LG and stat.ML

Abstract: We focus on the problem of minimizing a convex function $f$ over a convex set $S$ given $T$ queries to a stochastic first order oracle. We argue that the complexity of convex minimization is only determined by the rate of growth of the function around its minimizer $x^*_{f,S}$, as quantified by a Tsybakov-like noise condition. Specifically, we prove that if $f$ grows at least as fast as $\|x-x^*_{f,S}\|^\kappa$ around its minimum, for some $\kappa > 1$, then the optimal rate of learning $f(x^*_{f,S})$ is $\Theta(T^{-\frac{\kappa}{2\kappa-2}})$. The classic rate $\Theta(1/\sqrt{T})$ for convex functions and $\Theta(1/T)$ for strongly convex functions are special cases of our result for $\kappa \rightarrow \infty$ and $\kappa=2$, and even faster rates are attained for $\kappa < 2$. We also derive tight bounds for the complexity of learning $x^*_{f,S}$, where the optimal rate is $\Theta(T^{-\frac{1}{2\kappa-2}})$. Interestingly, these precise rates for convex optimization also characterize the complexity of active learning, and our results further strengthen the connections between the two fields, both of which rely on feedback-driven queries.
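
A quick worked check of the special cases quoted in the abstract (plain substitution into the stated rates, nothing beyond the abstract itself): setting $\kappa = 2$ in $\Theta(T^{-\frac{\kappa}{2\kappa-2}})$ gives exponent $\frac{2}{2\cdot 2 - 2} = 1$, i.e. the strongly convex rate $\Theta(1/T)$, while letting $\kappa \rightarrow \infty$ gives $\frac{\kappa}{2\kappa-2} \rightarrow \frac{1}{2}$, i.e. the generic convex rate $\Theta(1/\sqrt{T})$. For learning the minimizer $x^*_{f,S}$, the exponent $\frac{1}{2\kappa-2}$ evaluates to $\frac{1}{2}$ at $\kappa = 2$, so the optimization error $\Theta(1/\sqrt{T})$ in $x^*_{f,S}$ is the square root of the $\Theta(1/T)$ error in $f(x^*_{f,S})$, consistent with quadratic growth around the minimizer.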

Citations (31)
