Low-Complexity Algorithm for Restless Bandits with Imperfect Observations (2108.03812v3)

Published 9 Aug 2021 in cs.LG and math.OC

Abstract: We consider a class of restless bandit problems that finds broad application in reinforcement learning and stochastic optimization. We consider $N$ independent discrete-time Markov processes, each of which has two possible states: 1 and 0 ("good" and "bad"). Only if a process is both in state 1 and observed to be so does reward accrue. The aim is to maximize the expected discounted sum of returns over the infinite horizon subject to a constraint that only $M$ $(<N)$ processes may be observed at each step. Observation is error-prone: there are known probabilities that state 1 (0) will be observed as 0 (1). From this one knows, at any time $t$, a probability that process $i$ is in state 1. The resulting system may be modeled as a restless multi-armed bandit problem with an information state space of uncountable cardinality. Restless bandit problems with even finite state spaces are PSPACE-hard in general. We propose a novel approach for simplifying the dynamic programming equations of this class of restless bandits and develop a low-complexity algorithm that achieves strong performance and is readily extensible to the general restless bandit model with observation errors. Under certain conditions, we establish the existence (indexability) of the Whittle index and its equivalence to our algorithm. When those conditions do not hold, we show by numerical experiments the near-optimal performance of our algorithm in the general parametric space. Furthermore, we theoretically prove the optimality of our algorithm for homogeneous systems.
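
For concreteness, the information state described in the abstract evolves by a standard Bayesian filter. The following minimal Python sketch is an illustration of that filtering step under the stated observation-error model, not the paper's algorithm; the parameter names (p11, p01, eps, delta) are our assumptions, not notation from the paper.

```python
# Minimal sketch: belief update for one two-state arm with noisy observations.
# omega = P(arm is in state 1), the "information state" of the abstract.
# p11 = P(state 1 at t+1 | state 1 at t), p01 = P(state 1 at t+1 | state 0 at t)
# eps = P(observe 0 | true state 1), delta = P(observe 1 | true state 0)

def propagate(omega, p11, p01):
    """One-step Markov prediction: P(next state = 1) given current belief."""
    return omega * p11 + (1.0 - omega) * p01

def update(omega, obs, p11, p01, eps, delta):
    """Bayesian posterior after an error-prone observation, then one Markov step."""
    if obs == 1:
        # P(state 1 | obs 1) = (1-eps)*omega / [(1-eps)*omega + delta*(1-omega)]
        post = omega * (1.0 - eps) / (omega * (1.0 - eps) + (1.0 - omega) * delta)
    else:
        # P(state 1 | obs 0) = eps*omega / [eps*omega + (1-delta)*(1-omega)]
        post = omega * eps / (omega * eps + (1.0 - omega) * (1.0 - delta))
    return propagate(post, p11, p01)

# Example step with illustrative numbers:
omega = 0.5
omega = update(omega, obs=1, p11=0.9, p01=0.2, eps=0.1, delta=0.05)
```

An arm that is not observed at time $t$ simply advances by the prediction step, propagate(omega, p11, p01). Because the belief takes values in a continuum, the information state space is uncountable, which is why the dynamic programming equations require the simplification the paper proposes.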

