Multi-Power Level $Q$-Learning Algorithm for Random Access in NOMA mMTC Systems (2301.05196v1)

Published 12 Jan 2023 in cs.NI, cs.LG, cs.SY, eess.SP, and eess.SY

Abstract: The massive machine-type communications (mMTC) service will be part of the new services planned for beyond-fifth-generation (B5G) wireless communication. In mMTC, thousands of devices sporadically access the available resource blocks on the network. In this scenario, the massive random access (RA) problem arises when two or more devices collide by selecting the same resource block. Several techniques exist to deal with this problem. One of them deploys $Q$-learning (QL), in which devices store in their $Q$-table the rewards sent by the central node indicating the quality of the transmission performed; each device thereby learns which resource blocks to select for transmission in order to avoid collisions. We propose a multi-power level QL (MPL-QL) algorithm that uses a non-orthogonal multiple access (NOMA) transmission scheme to generate transmit-power diversity and accommodate more than one device in the same time-slot, as long as the signal-to-interference-plus-noise ratio (SINR) exceeds a threshold value. The numerical results reveal that the best performance-complexity trade-off is obtained with a larger number of power levels, typically eight. The proposed MPL-QL can deliver better throughput and lower latency than other recent QL-based algorithms found in the literature.
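
To make the mechanism concrete, the following is a minimal, hypothetical Python sketch of the kind of tabular Q-learning loop the abstract describes: each device keeps a $Q$-table indexed by (time-slot, power-level) actions, chooses an action epsilon-greedily, and updates its table with the reward fed back by the central node. The hyperparameters, reward values, and the simplified collision rule used as a stand-in for the SINR-threshold check are illustrative assumptions, not details taken from the paper.

```python
import random

# Toy sketch of the multi-power-level Q-learning (MPL-QL) idea.
# NUM_SLOTS, NUM_DEVICES, ALPHA, EPSILON and the +/-1 rewards are assumptions;
# eight power levels reflects the trade-off reported in the abstract.

NUM_SLOTS = 10              # available resource blocks per frame (assumed)
NUM_LEVELS = 8              # power levels; ~8 is the reported sweet spot
NUM_DEVICES = 50            # active devices in this toy run (assumed)
ALPHA, EPSILON = 0.1, 0.1   # learning rate / exploration rate (assumed)

class Device:
    def __init__(self):
        # One Q-value per (time-slot, power-level) action pair.
        self.q = [[0.0] * NUM_LEVELS for _ in range(NUM_SLOTS)]

    def act(self):
        """Epsilon-greedy choice of a (slot, power level) action."""
        if random.random() < EPSILON:
            return random.randrange(NUM_SLOTS), random.randrange(NUM_LEVELS)
        return max(((s, p) for s in range(NUM_SLOTS) for p in range(NUM_LEVELS)),
                   key=lambda a: self.q[a[0]][a[1]])

    def update(self, slot, level, reward):
        """Bandit-style Q-update with the central node's feedback."""
        self.q[slot][level] += ALPHA * (reward - self.q[slot][level])

devices = [Device() for _ in range(NUM_DEVICES)]
for frame in range(1000):
    choices = [d.act() for d in devices]
    for d, (slot, level) in zip(devices, choices):
        # Crude success proxy: NOMA power diversity lets devices share a slot
        # as long as no two of them pick the same power level in it
        # (a stand-in for the paper's SINR-threshold check).
        success = choices.count((slot, level)) == 1
        d.update(slot, level, 1.0 if success else -1.0)
```

In this simplified view, the power level becomes part of the action space, which is the core idea that lets several devices coexist in one time-slot; the actual reward definition and SINR model in the paper are more detailed.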

Citations (3)
