
A Time and Space Efficient Algorithm for Contextual Linear Bandits (1207.3024v4)

Published 12 Jul 2012 in cs.DS and cs.GT

Abstract: We consider a multi-armed bandit problem where payoffs are a linear function of an observed stochastic contextual variable. In the scenario where there exists a gap between optimal and suboptimal rewards, several algorithms have been proposed that achieve $O(\log T)$ regret after $T$ time steps. However, proposed methods either have a computation complexity per iteration that scales linearly with $T$ or achieve regrets that grow linearly with the number of contexts $|\mathcal{X}|$. We propose an $\epsilon$-greedy type of algorithm that solves both limitations. In particular, when contexts are variables in $\mathbb{R}^d$, we prove that our algorithm has a constant computation complexity per iteration of $O(\mathrm{poly}(d))$ and can achieve a regret of $O(\mathrm{poly}(d) \log T)$ even when $|\mathcal{X}| = \Omega(2^d)$. In addition, unlike previous algorithms, its space complexity scales like $O(Kd^2)$ and does not grow with $T$.
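
To make the complexity claims concrete, below is a minimal sketch of an $\epsilon$-greedy contextual linear bandit in the spirit of the abstract. It is not the authors' exact algorithm: the class name, the fixed exploration rate, and the per-arm ridge-regression estimates are assumptions made for illustration. The point it shows is that storing only per-arm $d \times d$ statistics gives $O(Kd^2)$ space and $O(\mathrm{poly}(d))$ work per round, neither growing with $T$.

```python
import numpy as np

class EpsGreedyLinearBandit:
    """Illustrative epsilon-greedy contextual linear bandit (not the paper's algorithm).

    Keeps per-arm running statistics A_k (d x d Gram matrix) and b_k (d-vector),
    so space is O(K d^2) and per-round work is O(poly(d)), independent of T.
    """

    def __init__(self, n_arms: int, dim: int, epsilon: float = 0.05, reg: float = 1.0):
        self.K, self.d, self.epsilon = n_arms, dim, epsilon
        # Ridge-regularised Gram matrices and reward-weighted context sums, one per arm.
        self.A = np.stack([reg * np.eye(dim) for _ in range(n_arms)])
        self.b = np.zeros((n_arms, dim))
        self.rng = np.random.default_rng(0)

    def select(self, x: np.ndarray) -> int:
        """Pick an arm for context x: explore with probability epsilon, else exploit."""
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(self.K))
        # Least-squares estimate per arm; solving a d x d system is O(poly(d)).
        theta = np.stack([np.linalg.solve(self.A[k], self.b[k]) for k in range(self.K)])
        return int(np.argmax(theta @ x))

    def update(self, arm: int, x: np.ndarray, reward: float) -> None:
        """Rank-one update of the chosen arm's statistics after observing the reward."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```

The paper analyses a decaying exploration schedule to get $O(\mathrm{poly}(d)\log T)$ regret under a reward gap; the fixed $\epsilon$ above is only a placeholder for that schedule.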

Authors (4)
  1. José Bento (29 papers)
  2. Stratis Ioannidis (67 papers)
  3. S. Muthukrishnan (51 papers)
  4. Jinyun Yan (8 papers)
Citations (2)
