Learning Orthogonal Projections in Linear Bandits

Published 26 Jun 2019 in cs.LG and stat.ML (arXiv:1906.10981v3)

Abstract: In a linear stochastic bandit model, each arm is a vector in a Euclidean space, and the observed return at each time step is an unknown linear function of the arm chosen at that time step. In this paper, we investigate the problem of learning the best arm in a linear stochastic bandit model where each arm's expected reward is an unknown linear function of the projection of the arm onto a subspace; we call this the projection reward. Unlike the classical linear bandit problem, in which the observed return corresponds to the reward, the projection reward at each time step is unobservable. Such a model is useful in recommendation applications where the observed return is corrupted by each individual's biases, which we wish to exclude from the learned model. In the case where there are finitely many arms, we develop a strategy that achieves $O(|\mathbb{D}|\log n)$ regret, where $n$ is the number of time steps and $|\mathbb{D}|$ is the number of arms. In the case where each arm is chosen from an infinite compact set, our strategy achieves $O(n^{2/3}(\log n)^{1/2})$ regret. Experiments verify the efficiency of our strategy.
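
A minimal sketch may help make the model concrete. The snippet below is an illustration only, not the paper's algorithm: the dimensions, noise level, and variable names are assumptions. It simulates a finite arm set in which each arm's projection reward $\theta^\top P x$ (unobservable per step) differs from the observed return $\theta^\top x$ plus noise, where $P$ is the orthogonal projection onto a subspace.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, n_arms = 5, 2, 20      # ambient dimension, subspace dimension, |D| (assumed values)

# Orthonormal basis U for a hypothetical k-dimensional subspace;
# P = U @ U.T is the orthogonal projection onto it.
U, _ = np.linalg.qr(rng.standard_normal((d, k)))
P = U @ U.T

theta = rng.standard_normal(d)            # unknown linear parameter
arms = rng.standard_normal((n_arms, d))   # finite arm set D

def observed_return(x, noise_std=0.1):
    # What the learner sees: the full linear return, which mixes the
    # projection reward with the "bias" component orthogonal to the subspace.
    return theta @ x + noise_std * rng.standard_normal()

def projection_reward(x):
    # The quantity we actually want to maximize; unobservable at each time step.
    return theta @ (P @ x)

# The best arm under the projection reward need not coincide with the arm
# that has the highest expected observed return.
best_proj = int(np.argmax(arms @ P @ theta))
best_obs = int(np.argmax(arms @ theta))
print("best arm by projection reward:", best_proj)
print("best arm by raw linear return:", best_obs)
```

Running the sketch typically shows the two argmax indices disagreeing, which is the gap the paper's strategy must close while only ever observing the noisy full return.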
