Reinforcement Learning for Finite-Horizon Restless Multi-Armed Multi-Action Bandits (2109.09855v2)

Published 20 Sep 2021 in cs.LG, math.OC, and stat.ML

Abstract: We study a finite-horizon restless multi-armed bandit problem with multiple actions, dubbed R(MA)^2B. The state of each arm evolves according to a controlled Markov decision process (MDP), and the reward of pulling an arm depends on both the current state of the corresponding MDP and the action taken. The goal is to sequentially choose actions for the arms so as to maximize the expected cumulative reward. Since finding the optimal policy is typically intractable, we propose a computationally appealing index policy, which we call the Occupancy-Measured-Reward Index Policy. Our policy is well-defined even if the underlying MDPs are not indexable, and we prove that it is asymptotically optimal when the activation budget and the number of arms are scaled up while their ratio is held constant. For the case where the system parameters are unknown, we develop a learning algorithm, R(MA)^2B-UCB, which follows the principle of optimism in the face of uncertainty and further uses a generative model to fully exploit the structure of the Occupancy-Measured-Reward Index Policy. Compared with existing algorithms, R(MA)^2B-UCB performs close to the offline optimal policy and achieves sublinear regret with low computational complexity. Experimental results show that R(MA)^2B-UCB outperforms existing algorithms in both regret and run time.
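
To make the setting concrete, here is a minimal runnable sketch of a finite-horizon restless multi-armed multi-action bandit with a per-step activation budget, driven by a myopic index heuristic. Everything here is an illustrative assumption: the toy sizes, the transition tensor P, the reward tensor R, and the greedy_index_policy function are invented for exposition, and the heuristic merely stands in for (and is far simpler than) the paper's Occupancy-Measured-Reward Index Policy.

```python
import numpy as np

# Toy instance of a finite-horizon restless multi-armed multi-action
# bandit (R(MA)^2B). All names and sizes are illustrative assumptions,
# not the paper's implementation.

rng = np.random.default_rng(0)

N_ARMS, N_STATES, N_ACTIONS, HORIZON, BUDGET = 5, 3, 2, 10, 2

# Each arm is a controlled MDP: P[arm, s, a, s'] is the transition
# kernel, R[arm, s, a] the expected reward. Action 0 is "passive"
# (free); actions >= 1 consume one unit of the per-step budget.
P = rng.dirichlet(np.ones(N_STATES), size=(N_ARMS, N_STATES, N_ACTIONS))
R = rng.random((N_ARMS, N_STATES, N_ACTIONS))
R[:, :, 0] = 0.0  # passive action earns nothing in this toy model

def greedy_index_policy(states):
    """Myopic heuristic: activate the BUDGET arms whose best active
    action has the largest immediate-reward advantage over staying
    passive. A stand-in for the paper's Occupancy-Measured-Reward
    Index Policy, not a reproduction of it."""
    idx = np.arange(N_ARMS)
    best_a = np.argmax(R[idx, states, 1:], axis=1) + 1
    gain = R[idx, states, best_a] - R[idx, states, 0]
    chosen = np.argsort(gain)[-BUDGET:]          # top-BUDGET arms by index
    actions = np.zeros(N_ARMS, dtype=int)
    actions[chosen] = best_a[chosen]
    return actions

states = rng.integers(N_STATES, size=N_ARMS)
total = 0.0
for t in range(HORIZON):
    actions = greedy_index_policy(states)
    total += R[np.arange(N_ARMS), states, actions].sum()
    # Each arm's state evolves under its own controlled MDP.
    states = np.array([rng.choice(N_STATES, p=P[i, states[i], actions[i]])
                       for i in range(N_ARMS)])
print(f"cumulative reward over horizon: {total:.2f}")
```

The paper's index is instead derived from occupancy measures of the arms' MDPs, which is what makes it well-defined even without indexability and is what the heuristic above lacks: a myopic rule ignores how today's activations shape the states available over the remaining horizon.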

Authors (3)
  1. Guojun Xiong (27 papers)
  2. Jian Li (667 papers)
  3. Rahul Singh (141 papers)
Citations (4)
