Online Learning with Feedback Graphs: The True Shape of Regret (2306.02971v1)

Published 5 Jun 2023 in cs.LG, cs.IT, math.IT, math.ST, and stat.TH

Abstract: Sequential learning with feedback graphs is a natural extension of the multi-armed bandit problem in which the learner is equipped with an underlying graph structure that provides additional information: playing an action reveals the losses of all of that action's neighbors. The problem was introduced by Mannor and Shamir (2011) and has received considerable attention in recent years. It is generally stated in the literature that the minimax regret rate for this problem is of order $\sqrt{\alpha T}$, where $\alpha$ is the independence number of the graph and $T$ is the time horizon. However, this has been proven only when the number of rounds $T$ is larger than $\alpha^3$, which poses a significant restriction on the usability of the result for large graphs. In this paper, we define a new quantity $R^*$, called the \emph{problem complexity}, and prove that the minimax regret is proportional to $R^*$ for any graph and time horizon $T$. Introducing an intricate exploration strategy, we define an algorithm that achieves the minimax optimal regret bound and becomes the first provably optimal algorithm for this setting, even when $T$ is smaller than $\alpha^3$.
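
To make the feedback-graph setting concrete, here is a minimal sketch of an EXP3-SET-style learner (Alon et al.) over an undirected graph with self-loops. This is not the algorithm proposed in the paper, which uses a more intricate exploration strategy to reach the $R^*$ bound; the graph, loss function, and parameter values below are illustrative assumptions only.

```python
import numpy as np

def exp3_set(neighbors, loss_fn, T, eta=0.1, rng=None):
    """EXP3-SET-style learner for bandits with an undirected feedback graph.

    neighbors[i] is the set of arms whose losses are revealed when arm i is
    played (assumed to contain i itself, i.e. the graph has self-loops).
    loss_fn(t, i) returns the loss in [0, 1] of arm i at round t.
    """
    rng = rng or np.random.default_rng(0)
    K = len(neighbors)
    weights = np.ones(K)
    total_loss = 0.0

    for t in range(T):
        p = weights / weights.sum()          # exponential-weights distribution
        arm = int(rng.choice(K, p=p))
        observed = neighbors[arm]            # side observations from the graph
        total_loss += loss_fn(t, arm)

        # Importance-weighted loss estimates: q[i] is the probability that
        # arm i is observed this round (i.e. that a neighbor of i is played).
        loss_hat = np.zeros(K)
        for i in observed:
            q_i = sum(p[j] for j in range(K) if i in neighbors[j])
            loss_hat[i] = loss_fn(t, i) / q_i

        weights *= np.exp(-eta * loss_hat)
        weights /= weights.max()             # rescale for numerical stability

    return total_loss

# Toy run: 3 arms on a path graph 0-1-2; arm 1 has the lowest mean loss.
nbrs = [{0, 1}, {0, 1, 2}, {1, 2}]
means = [0.6, 0.2, 0.7]
gen = np.random.default_rng(1)
loss = lambda t, i: float(gen.random() < means[i])
print(exp3_set(nbrs, loss, T=2000))
```

Denser graphs (smaller independence number $\alpha$) give more side observations per round, which is exactly why the regret scales with $\alpha$ rather than with the number of arms.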

Authors (2)
  1. Tomáš Kocák (4 papers)
  2. Alexandra Carpentier (51 papers)
Citations (2)