Contextual Recommendations and Low-Regret Cutting-Plane Algorithms (2106.04819v1)

Published 9 Jun 2021 in cs.LG, cs.DS, and math.OC

Abstract: We consider the following variant of contextual linear bandits motivated by routing applications in navigational engines and recommendation systems. We wish to learn a hidden $d$-dimensional value $w^*$. Every round, we are presented with a subset $\mathcal{X}_t \subseteq \mathbb{R}^d$ of possible actions. If we choose (i.e. recommend to the user) action $x_t$, we obtain utility $\langle x_t, w^* \rangle$ but only learn the identity of the best action $\arg\max_{x \in \mathcal{X}_t} \langle x, w^* \rangle$. We design algorithms for this problem which achieve regret $O(d\log T)$ and $\exp(O(d \log d))$. To accomplish this, we design novel cutting-plane algorithms with low "regret" -- the total distance between the true point $w^*$ and the hyperplanes the separation oracle returns. We also consider the variant where we are allowed to provide a list of several recommendations. In this variant, we give an algorithm with $O(d^2 \log d)$ regret and list size $\mathrm{poly}(d)$. Finally, we construct nearly tight algorithms for a weaker variant of this problem where the learner only learns the identity of an action that is better than the recommendation. Our results rely on new algorithmic techniques in convex geometry (including a variant of Steiner's formula for the centroid of a convex set) which may be of independent interest.
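The interaction protocol from the abstract can be sketched in a few lines. This is a minimal simulation, not the paper's algorithm: the learner here is a hypothetical naive baseline that recommends an arbitrary action, and the dimension, horizon, and action-set sizes are illustrative. It shows the key feature of the feedback model: the learner observes only the identity of the best action, never the utilities themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 3, 100
w_star = rng.normal(size=d)  # hidden value vector w* (unknown to the learner)

total_regret = 0.0
for t in range(T):
    actions = rng.normal(size=(5, d))    # round-t action set X_t ⊆ R^d
    x_t = actions[0]                     # naive recommendation (placeholder learner)
    utilities = actions @ w_star         # <x, w*> for each action (hidden)
    best = actions[np.argmax(utilities)] # feedback: identity of the best action only
    # per-round regret is the utility gap between the best action and ours
    total_regret += best @ w_star - x_t @ w_star

print(total_regret >= 0)  # regret is nonnegative by definition of the best action
```

The paper's algorithms replace the placeholder learner with cutting-plane updates: each round's feedback implies linear inequalities on $w^*$ (the best action beats every other action in $\mathcal{X}_t$), which cut down the set of candidate vectors over time.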

Authors (6)
  1. Sreenivas Gollapudi (26 papers)
  2. Guru Guruganesh (23 papers)
  3. Kostas Kollias (15 papers)
  4. Pasin Manurangsi (127 papers)
  5. Renato Paes Leme (59 papers)
  6. Jon Schneider (50 papers)
Citations (3)
