
Relaxed Indexability and Index Policy for Partially Observable Restless Bandits

Published 26 Jul 2021 in math.OC (arXiv:2107.11939v4)

Abstract: This paper addresses an important class of restless multi-armed bandit (RMAB) problems with broad applications in operations research, stochastic optimization, and reinforcement learning. There are $N$ independent Markov processes, each of which may be operated and observed and offers a reward. Due to a resource constraint, only a subset of $M~(M<N)$ processes can be operated at a time, and the accrued reward is determined by the states of the selected processes. We formulate the problem as a partially observable RMAB with an infinite state space and design a low-complexity algorithm that achieves near-optimal performance. The algorithm is based on a generalization of Whittle's original notion of indexability. Referred to as relaxed indexability, this extended definition enables efficient online verification and computation of the approximate Whittle index within the proposed algorithmic framework.
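The setup described in the abstract can be sketched in code: $N$ two-state hidden Markov arms, belief-state tracking under partial observability, and a top-$M$ index policy. This is a minimal illustration under assumed dynamics (Gilbert-Elliott arms with hypothetical transition probabilities `p01`, `p11`); the index used here is the myopic belief itself, a simple placeholder rather than the paper's relaxed Whittle index, which is not reproduced here.

```python
import numpy as np

def belief_update(b, p01, p11, observed=None):
    """One-step belief update for a two-state (Gilbert-Elliott) arm.

    b: current belief P(state = 1). If the arm was operated, its state is
    observed exactly (observed in {0, 1}); otherwise the belief simply
    propagates through the transition probabilities p01 = P(0->1),
    p11 = P(1->1).
    """
    if observed is not None:
        b = float(observed)
    return b * p11 + (1.0 - b) * p01

def top_m_index_policy(indices, m):
    """Operate the m arms with the largest index values."""
    return set(np.argsort(-np.asarray(indices))[:m].tolist())

# Simulate N arms, operate M per step; reward = number of operated arms
# found in the good state. All parameters below are illustrative.
rng = np.random.default_rng(0)
N, M, T = 5, 2, 200
p01, p11 = 0.2, 0.8                 # assumed identical dynamics for all arms
states = rng.random(N) < 0.5        # hidden true states
beliefs = np.full(N, 0.5)           # initial beliefs
total_reward = 0
for _ in range(T):
    chosen = top_m_index_policy(beliefs, M)   # placeholder myopic index
    for i in range(N):
        obs = int(states[i]) if i in chosen else None
        if i in chosen:
            total_reward += int(states[i])
        beliefs[i] = belief_update(beliefs[i], p01, p11, obs)
        states[i] = rng.random() < (p11 if states[i] else p01)
print(total_reward)
```

The key structural point is that operating an arm resets its belief to a known state, while passive arms drift toward the chain's stationary distribution, which is what makes the scheduling problem "restless."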


Authors (1)
