Tabular and Deep Reinforcement Learning for Gittins Index (2405.01157v1)
Abstract: In the realm of multi-armed bandit problems, the Gittins index policy is known to be optimal for maximizing the expected total discounted reward obtained from pulling Markovian arms. In most realistic scenarios, however, the Markovian state transition probabilities are unknown, so the Gittins indices cannot be computed. One can then resort to reinforcement learning (RL) algorithms that explore the state space to learn these indices while exploiting to maximize the reward collected. In this work, we propose a tabular (QGI) and a deep RL (DGN) algorithm for learning the Gittins index, both based on the retirement formulation of the multi-armed bandit problem. Compared with existing RL algorithms that learn the Gittins index, our algorithms have lower run time, require less storage (a smaller Q-table in QGI and a smaller replay buffer in DGN), and exhibit better empirical convergence to the Gittins index. This makes them well suited to problems with large state spaces and a viable alternative to existing methods. As a key application, we demonstrate the use of our algorithms for minimizing the mean flowtime in a job scheduling problem where jobs are available in batches and have an unknown service time distribution.
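To make the retirement formulation concrete: the Gittins index of a state is, up to a factor of (1 - β), the lump-sum retirement reward at which one is indifferent between continuing to pull the arm and retiring in that state. The sketch below is a minimal two-timescale tabular illustration of learning such indifference points from samples; the simulator `step`, the table layout, and the learning rates are assumptions for illustration, not the paper's exact QGI algorithm.

```python
import numpy as np

def qgi_sketch(step, n_states, beta=0.9, iters=50_000,
               alpha=0.1, eta=0.01, seed=0):
    """Illustrative two-timescale sketch of learning Gittins indices
    via the retirement formulation (hypothetical, not the paper's QGI).

    step(s) -> (next_state, reward) is a user-supplied single-arm simulator.
    """
    rng = np.random.default_rng(seed)
    # Q[k, s]: value of *continuing* in state s when retirement pays lam[k].
    # Retiring is worth exactly lam[k], so it needs no separate table entry.
    Q = np.zeros((n_states, n_states))
    # lam[k]: current estimate of the retirement reward at which one is
    # indifferent in reference state k.
    lam = np.zeros(n_states)
    s = rng.integers(n_states)
    for _ in range(iters):
        s2, r = step(s)
        for k in range(n_states):
            # Fast timescale: Q-learning backup for the stopping problem
            # where, at s2, one may retire for a lump sum lam[k].
            target = r + beta * max(Q[k, s2], lam[k])
            Q[k, s] += alpha * (target - Q[k, s])
            # Slow timescale: move lam[k] toward the indifference point,
            # where retiring at k for lam[k] matches continuing from k.
            lam[k] += eta * (Q[k, k] - lam[k])
        # Occasional restart so every state keeps being visited.
        s = s2 if rng.random() < 0.9 else rng.integers(n_states)
    return (1.0 - beta) * lam  # approximate Gittins indices per state
```

For a concrete arm, `step` could simulate a small Markov reward chain; `lam` then settles near the indifference retirement rewards, and scaling by (1 - β) yields approximate Gittins indices. The per-reference-state table here is one simple choice; the paper's QGI is claimed to need a smaller Q-table than existing index-learning methods.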