Optimizing the Long-Term Average Reward for Continuing MDPs: A Technical Report (2104.06139v2)

Published 13 Apr 2021 in cs.LG, cs.IT, cs.NI, and math.IT

Abstract: In recent work [1], we struck a balance between the information freshness, measured by the age of information (AoI), experienced by users and the energy consumed by sensors, by appropriately activating sensors to update their current status in caching-enabled Internet of Things (IoT) networks. To solve this problem, we cast the corresponding status update procedure as a continuing Markov Decision Process (MDP), i.e., one without terminal states, where the number of state-action pairs increases exponentially with the number of sensors and users considered. To circumvent the resulting curse of dimensionality, we established a methodology for designing deep reinforcement learning (DRL) algorithms that maximize (resp. minimize) the average reward (resp. cost), by integrating R-learning, a tabular reinforcement learning (RL) algorithm tailored to maximizing the long-term average reward, with traditional DRL algorithms, which were originally developed to optimize the discounted long-term cumulative reward rather than the average reward. In this technical report, we present a detailed discussion of the technical contributions of this methodology.
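For readers unfamiliar with R-learning (Schwartz, 1993), the tabular algorithm the abstract builds on, the sketch below illustrates its key difference from discounted Q-learning: the temporal-difference target subtracts a running estimate of the average reward, rho, instead of applying a discount factor. This is a minimal illustrative sketch only; the environment interface (`env.reset()`, `env.step()`) and all hyperparameter values are assumptions for demonstration, not details taken from the paper or from [1].

```python
import numpy as np

def r_learning(env, n_states, n_actions, alpha=0.1, beta=0.01,
               epsilon=0.1, n_steps=100_000, seed=0):
    """Tabular R-learning for a continuing (non-terminating) MDP.

    Assumes a hypothetical env with reset() -> state and
    step(action) -> (next_state, reward); no terminal states.
    """
    rng = np.random.default_rng(seed)
    R = np.zeros((n_states, n_actions))  # relative action-value table
    rho = 0.0                            # running average-reward estimate
    s = env.reset()
    for _ in range(n_steps):
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(R[s]))
        was_greedy = R[s, a] == R[s].max()
        s_next, r = env.step(a)
        # TD error uses rho in place of a discount factor:
        # delta = r - rho + max_a' R(s', a') - R(s, a)
        td_error = r - rho + R[s_next].max() - R[s, a]
        R[s, a] += alpha * td_error
        # rho is updated only on greedy (exploitation) steps
        if was_greedy:
            rho += beta * td_error
        s = s_next
    return R, rho
```

The paper's contribution, per the abstract, is carrying this average-reward update over to DRL, where the tabular R is replaced by a function approximator; the tabular form above only shows the update rule being transplanted.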

Authors (6)
  1. Chao Xu
  2. Yiping Xie
  3. Xijun Wang
  4. Howard H. Yang
  5. Dusit Niyato
  6. Tony Q. S. Quek
Citations (2)