
Q-greedyUCB: a New Exploration Policy for Adaptive and Resource-efficient Scheduling

Published 10 Jun 2020 in eess.SY and cs.SY (arXiv:2006.05902v1)

Abstract: This paper proposes a learning algorithm to find a scheduling policy that achieves an optimal delay-power trade-off in communication systems. Reinforcement learning (RL) is used to minimize the expected latency under a given energy constraint in environments where conditions such as traffic arrival rates or channel states can change over time. For this purpose, the problem is formulated as an infinite-horizon Markov Decision Process (MDP) with constraints. To handle the constrained optimization problem, we adopt the Lagrangian relaxation technique. We then propose Q-greedyUCB, a variant of Q-learning that combines the \emph{average}-reward Q-learning algorithm with the Upper Confidence Bound (UCB) exploration policy to solve this decision-making problem. We prove through mathematical analysis that the Q-greedyUCB algorithm converges. Simulation results show that Q-greedyUCB finds an optimal scheduling strategy and is more efficient than both Q-learning with $\varepsilon$-greedy exploration and the average-payoff RL algorithm in terms of the cumulative reward (i.e., the weighted sum of delay and energy) and the convergence speed. We also show that our algorithm reduces regret by up to 12% compared to these two baselines.
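
The core technical move is Lagrangian relaxation: the energy constraint is folded into the objective through a multiplier $\lambda$, so the per-step cost becomes a weighted sum of delay and energy, $d_t + \lambda e_t$, and the constrained MDP can be treated as an unconstrained one. The agent then learns this relaxed objective with an average-reward Q-learning update while choosing actions by a UCB rule instead of $\varepsilon$-greedy. The Python sketch below illustrates this combination in a tabular setting; the class name, hyperparameters, and the simplified average-reward update are illustrative assumptions based on standard average-reward Q-learning and UCB, not the paper's exact algorithm.

import math
from collections import defaultdict

# Minimal sketch: tabular average-reward Q-learning with UCB exploration,
# in the spirit of Q-greedyUCB. All names and hyperparameters are
# illustrative assumptions, not taken from the paper.

class QGreedyUCBSketch:
    def __init__(self, actions, alpha=0.1, beta=0.01, c=2.0, lam=0.5):
        self.actions = actions        # e.g., number of packets to transmit
        self.alpha = alpha            # step size for Q-values
        self.beta = beta              # step size for the average-reward estimate
        self.c = c                    # UCB exploration weight
        self.lam = lam                # Lagrange multiplier on energy
        self.Q = defaultdict(float)   # Q[(state, action)]
        self.N = defaultdict(int)     # visit counts N[(state, action)]
        self.rho = 0.0                # running estimate of the average reward
        self.t = 0                    # global time step

    def select_action(self, state):
        # UCB rule: greedy value plus an exploration bonus that shrinks
        # as the (state, action) pair is visited more often.
        self.t += 1
        best, best_val = None, -math.inf
        for a in self.actions:
            n = self.N[(state, a)]
            if n == 0:
                return a  # try every action at least once
            val = self.Q[(state, a)] + self.c * math.sqrt(math.log(self.t) / n)
            if val > best_val:
                best, best_val = a, val
        return best

    def reward(self, delay, energy):
        # Lagrangian relaxation: one scalar reward trading off delay
        # against energy via the multiplier lambda.
        return -(delay + self.lam * energy)

    def update(self, s, a, r, s_next):
        # Simplified average-reward (relative value) Q-learning update:
        # the average-reward estimate rho is subtracted so Q-values stay bounded.
        self.N[(s, a)] += 1
        max_next = max(self.Q[(s_next, b)] for b in self.actions)
        td_error = r - self.rho + max_next - self.Q[(s, a)]
        self.Q[(s, a)] += self.alpha * td_error
        self.rho += self.beta * td_error

In a queueing simulation, the state could be, say, the current queue length together with the channel condition, the action the number of packets to transmit in a slot, and the environment would return the observed delay and energy consumption from which the scalar reward is formed.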
