
Achieving Exponential Asymptotic Optimality in Average-Reward Restless Bandits without Global Attractor Assumption (2405.17882v2)

Published 28 May 2024 in cs.LG, math.OC, and math.PR

Abstract: We consider the infinite-horizon average-reward restless bandit problem. We propose a novel \emph{two-set policy} that maintains two dynamic subsets of arms: one subset of arms has a nearly optimal state distribution and takes actions according to an Optimal Local Control routine; the other subset of arms is driven towards the optimal state distribution and gradually merged into the first subset. We show that our two-set policy is asymptotically optimal with an $O(\exp(-C N))$ optimality gap for an $N$-armed problem, under the mild assumptions of aperiodic-unichain, non-degeneracy, and local stability. Our policy is the first to achieve \emph{exponential asymptotic optimality} under the above set of easy-to-verify assumptions, whereas prior work either requires a strong \emph{global attractor} assumption or only achieves an $O(1/\sqrt{N})$ optimality gap. We further discuss obstacles in weakening the assumptions by demonstrating examples where exponential asymptotic optimality is not achievable when any of the three assumptions is violated. Notably, we prove a lower bound for a large class of locally unstable restless bandits, showing that local stability is particularly fundamental for exponential asymptotic optimality. Finally, we use simulations to demonstrate that the two-set policy outperforms previous policies on certain restless bandit problems and performs competitively overall.
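The decision loop described in the abstract can be sketched in a few lines. This is an illustrative sketch only, not the paper's algorithm: the helper callables `local_action`, `drive_action`, and `near_optimal` are hypothetical stand-ins for the Optimal Local Control routine, the drive-toward-optimal-distribution subroutine, and the membership test for the nearly-optimal subset, respectively, and the budget handling is simplified.

```python
import numpy as np

def two_set_policy_step(states, in_good_set, budget,
                        local_action, drive_action, near_optimal):
    """One decision epoch of a (hypothetical) two-set policy sketch.

    states       : int array of per-arm states
    in_good_set  : boolean mask; True for arms in the nearly-optimal subset
    budget       : total number of arms that may be activated this epoch
    local_action : state -> {0, 1}, stand-in for Optimal Local Control
    drive_action : state -> {0, 1}, stand-in for the drive subroutine
    near_optimal : state -> bool, stand-in for the merge criterion
    """
    n_arms = len(states)
    actions = np.zeros(n_arms, dtype=int)

    # Arms already in the good subset follow the local control routine.
    for i in np.flatnonzero(in_good_set):
        actions[i] = local_action(states[i])

    # Remaining arms are driven towards the optimal state distribution,
    # using whatever activation budget the good subset left over.
    remaining = budget - actions.sum()
    for i in np.flatnonzero(~in_good_set):
        if remaining <= 0:
            break
        if drive_action(states[i]) == 1:
            actions[i] = 1
            remaining -= 1

    # Arms whose state now satisfies the merge criterion join the good
    # subset; in the sketch the subset only grows.
    new_good_set = in_good_set | np.array([near_optimal(s) for s in states])
    return actions, new_good_set
```

In the sketch the good subset absorbs arms one-sidedly; the actual policy's merge rule and its budget accounting are what drive the $O(\exp(-CN))$ gap under the paper's three assumptions.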
