
Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization (1706.00090v3)

Published 31 May 2017 in stat.ML, cs.IT, cs.LG, and math.IT

Abstract: In this paper, we consider the problem of sequentially optimizing a black-box function $f$ based on noisy samples and bandit feedback. We assume that $f$ is smooth in the sense of having a bounded norm in some reproducing kernel Hilbert space (RKHS), yielding a commonly-considered non-Bayesian form of Gaussian process bandit optimization. We provide algorithm-independent lower bounds on the simple regret, measuring the suboptimality of a single point reported after $T$ rounds, and on the cumulative regret, measuring the sum of regrets over the $T$ chosen points. For the isotropic squared-exponential kernel in $d$ dimensions, we find that an average simple regret of $\epsilon$ requires $T = \Omega\big(\frac{1}{\epsilon^2} (\log\frac{1}{\epsilon})^{d/2}\big)$, and the average cumulative regret is at least $\Omega\big( \sqrt{T(\log T)^{d/2}} \big)$, thus matching existing upper bounds up to the replacement of $d/2$ by $2d+O(1)$ in both cases. For the Mat\'ern-$\nu$ kernel, we give analogous bounds of the form $\Omega\big( (\frac{1}{\epsilon})^{2+d/\nu}\big)$ and $\Omega\big( T^{\frac{\nu + d}{2\nu + d}} \big)$, and discuss the resulting gaps to the existing upper bounds.
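
As a rough numerical illustration (not from the paper), the sketch below evaluates the lower-bound scalings stated in the abstract: the horizon $T$ needed for average simple regret $\epsilon$ under the squared-exponential and Mat\'ern-$\nu$ kernels. All hidden constants are dropped, so these are growth rates rather than exact sample counts; the function names and the example values of $d$ and $\nu$ are our own choices for illustration.

```python
import math

def se_simple_regret_lower_bound(eps: float, d: int) -> float:
    """Scaling of T required for average simple regret eps with the
    isotropic squared-exponential kernel in d dimensions:
    T = Omega((1/eps^2) * (log(1/eps))^(d/2)).
    Constants are omitted, so this is a rate, not an exact count."""
    assert 0 < eps < 1
    return (1.0 / eps**2) * math.log(1.0 / eps) ** (d / 2)

def matern_simple_regret_lower_bound(eps: float, d: int, nu: float) -> float:
    """Scaling for the Matern-nu kernel: T = Omega((1/eps)^(2 + d/nu))."""
    assert 0 < eps < 1
    return (1.0 / eps) ** (2.0 + d / nu)

if __name__ == "__main__":
    # Example parameters (d=2, nu=2.5) are illustrative, not from the paper.
    for eps in (0.1, 0.01):
        se = se_simple_regret_lower_bound(eps, d=2)
        mat = matern_simple_regret_lower_bound(eps, d=2, nu=2.5)
        print(f"eps={eps}: SE kernel T ~ {se:.3g}, Matern kernel T ~ {mat:.3g}")
```

Note how the Mat\'ern rate grows polynomially faster in $1/\epsilon$ than the squared-exponential rate, whose extra factor is only polylogarithmic, reflecting the weaker smoothness the Mat\'ern kernel imposes.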

Authors (3)
  1. Jonathan Scarlett (104 papers)
  2. Ilija Bogunovic (1 paper)
  3. Volkan Cevher (216 papers)
Citations (95)
