Power Allocation in Cache-Aided NOMA Systems: Optimization and Deep Reinforcement Learning Approaches (1909.11074v1)

Published 24 Sep 2019 in cs.IT, cs.LG, cs.NI, and math.IT

Abstract: This work exploits the advantages of two prominent techniques for future communication networks, namely caching and non-orthogonal multiple access (NOMA). In particular, a system with Rayleigh fading channels and cache-enabled users is analyzed. It is shown that the caching-NOMA combination provides a new cache-hit opportunity, which enhances both the cache utility and the effectiveness of NOMA. Importantly, this comes without requiring collaboration among users, and thus avoids complications such as user privacy, security, and selfishness. To optimize users' quality of service while ensuring fairness among users, the probability that all users can decode their desired signals is maximized. In NOMA, a superposition of multiple messages is sent to the users, so the defined objective is pursued by finding an appropriate power allocation across the message signals. To address this power allocation problem, two novel methods are proposed. The first is a divide-and-conquer-based method for which closed-form expressions for the optimal resource allocation policy are derived, making it simple and adaptable to the system context. The second is based on deep reinforcement learning and allows all users to share the full bandwidth. Finally, simulation results are provided to demonstrate the effectiveness of the proposed methods and to compare their performance.
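
The abstract's objective, maximizing the probability that all users decode their desired signals by choosing the power split among the superposed messages, can be illustrated with a small Monte Carlo sketch. The sketch below assumes a standard two-user downlink NOMA pair with successive interference cancellation and Rayleigh fading; the rate targets, average channel gains, transmit SNR, and grid search are hypothetical placeholders and do not reproduce the paper's cache-aided setup, its closed-form policy, or its deep reinforcement learning method.

```python
import numpy as np

# Minimal sketch (not the paper's method): estimate the probability that both
# users in a two-user downlink NOMA pair decode their intended messages under
# Rayleigh fading, as a function of the power-allocation coefficient.
# All numbers below (rate targets, channel gains, SNR) are illustrative.

def joint_decoding_prob(a_weak, snr_db=20.0, gain_weak=0.5, gain_strong=2.0,
                        r_weak=1.0, r_strong=1.0, n_trials=200_000, seed=0):
    """P(both users decode) when the weak user gets fraction a_weak of the power."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)          # transmit SNR = P / N0
    a_strong = 1.0 - a_weak

    # Rayleigh fading: squared channel magnitudes are exponentially distributed.
    g_w = rng.exponential(gain_weak, n_trials)    # weak (cell-edge) user
    g_s = rng.exponential(gain_strong, n_trials)  # strong (cell-center) user

    # Weak user decodes its own message, treating the strong user's signal as noise.
    sinr_w = a_weak * snr * g_w / (a_strong * snr * g_w + 1.0)

    # Strong user first decodes and cancels the weak user's message (SIC),
    # then decodes its own message interference-free.
    sinr_sic = a_weak * snr * g_s / (a_strong * snr * g_s + 1.0)
    snr_s = a_strong * snr * g_s

    ok_weak = np.log2(1 + sinr_w) >= r_weak
    ok_strong = (np.log2(1 + sinr_sic) >= r_weak) & (np.log2(1 + snr_s) >= r_strong)
    return np.mean(ok_weak & ok_strong)

# Coarse grid search over the power split, standing in for the paper's optimization.
best = max(np.linspace(0.55, 0.95, 41), key=joint_decoding_prob)
print(f"best a_weak ~ {best:.2f}, P(all decode) ~ {joint_decoding_prob(best):.3f}")
```

In this toy setting the weak user must receive the larger power share so that both it and the strong user's SIC stage can decode the weak user's message; the paper optimizes an analogous joint-decoding probability, additionally exploiting cache hits at the users.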

Citations (61)
