A Unified Approach to Translate Classical Bandit Algorithms to the Structured Bandit Setting (1810.08164v7)

Published 18 Oct 2018 in stat.ML and cs.LG

Abstract: We consider a finite-armed structured bandit problem in which mean rewards of different arms are known functions of a common hidden parameter $\theta^*$. Since we do not place any restrictions on these functions, the problem setting subsumes several previously studied frameworks that assume linear or invertible reward functions. We propose a novel approach to gradually estimate the hidden $\theta^*$ and use the estimate together with the mean reward functions to substantially reduce exploration of sub-optimal arms. This approach enables us to fundamentally generalize any classical bandit algorithm, including UCB and Thompson Sampling, to the structured bandit setting. We prove via regret analysis that our proposed UCB-C algorithm (the structured bandit version of UCB) pulls only a subset of the sub-optimal arms $O(\log T)$ times, while the other sub-optimal arms (referred to as non-competitive arms) are pulled $O(1)$ times. As a result, in cases where all sub-optimal arms are non-competitive, which can happen in many practical scenarios, the proposed algorithms achieve bounded regret. We also conduct simulations on the MovieLens recommendation dataset to demonstrate the improvement of the proposed algorithms over existing structured bandit algorithms.
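The abstract describes the core mechanism: maintain a set of hidden-parameter values consistent with the observed rewards, use the known mean-reward functions to identify which arms could still be optimal (the "competitive" arms), and run a classical index policy only on those arms. Below is a minimal, hedged Python sketch of that idea. The reward functions, the grid over $\theta$, and the Bernoulli reward model are illustrative assumptions, not the authors' exact pseudocode.

```python
import numpy as np

# Sketch of the UCB-C idea: mean rewards mu_k(theta) are known functions of a
# hidden theta*. Each round, keep a confidence set of plausible theta values,
# restrict attention to arms that are optimal for some plausible theta, and
# run standard UCB among that competitive set.

rng = np.random.default_rng(0)

theta_star = 0.7                        # hidden parameter (unknown to learner)
theta_grid = np.linspace(0.0, 1.0, 201) # illustrative candidate values of theta
mu = [lambda th: th,                    # assumed known mean-reward functions mu_k(theta)
      lambda th: 1.0 - th,
      lambda th: 0.5 + 0.3 * np.sin(3.0 * th)]
K = len(mu)

T = 5000
counts = np.zeros(K)
sums = np.zeros(K)

for t in range(1, T + 1):
    if t <= K:
        arm = t - 1                     # pull each arm once to initialize
    else:
        means = sums / counts
        conf = np.sqrt(2.0 * np.log(t) / counts)

        # Confidence set: theta values consistent with every empirical mean.
        consistent = np.ones_like(theta_grid, dtype=bool)
        for k in range(K):
            consistent &= np.abs(mu[k](theta_grid) - means[k]) <= conf[k]
        plausible = theta_grid[consistent] if consistent.any() else theta_grid

        # Competitive arms: optimal for at least one plausible theta.
        values = np.array([[mu[k](th) for k in range(K)] for th in plausible])
        competitive = np.unique(values.argmax(axis=1))

        # Standard UCB index, restricted to the competitive set.
        ucb = means + conf
        arm = competitive[np.argmax(ucb[competitive])]

    # Illustrative Bernoulli reward with mean mu_arm(theta_star).
    reward = float(rng.random() < mu[arm](theta_star))
    counts[arm] += 1
    sums[arm] += reward
```

When the confidence set shrinks around $\theta^*$, non-competitive arms drop out of the competitive set and stop being pulled, which is the mechanism behind the $O(1)$ pulls and bounded-regret claims in the abstract.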

Authors (5)
  1. Samarth Gupta (12 papers)
  2. Shreyas Chaudhari (19 papers)
  3. Subhojyoti Mukherjee (21 papers)
  4. Gauri Joshi (73 papers)
  5. Osman Yağan (38 papers)
Citations (10)
