Applications of Deep Reinforcement Learning in Communications and Networking: A Survey (1810.07862v1)

Published 18 Oct 2018 in cs.NI and cs.LG

Abstract: This paper presents a comprehensive literature review on applications of deep reinforcement learning in communications and networking. Modern networks, e.g., Internet of Things (IoT) and Unmanned Aerial Vehicle (UAV) networks, become more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize the network performance under uncertainty of network environment. Reinforcement learning has been efficiently used to enable the network entities to obtain the optimal policy including, e.g., decisions or actions, given their states when the state and action spaces are small. However, in complex and large-scale networks, the state and action spaces are usually large, and the reinforcement learning may not be able to find the optimal policy in reasonable time. Therefore, deep reinforcement learning, a combination of reinforcement learning with deep learning, has been developed to overcome the shortcomings. In this survey, we first give a tutorial of deep reinforcement learning from fundamental concepts to advanced models. Then, we review deep reinforcement learning approaches proposed to address emerging issues in communications and networking. The issues include dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation which are all important to next generation networks such as 5G and beyond. Furthermore, we present applications of deep reinforcement learning for traffic routing, resource sharing, and data collection. Finally, we highlight important challenges, open issues, and future research directions of applying deep reinforcement learning.

Authors (7)
  1. Nguyen Cong Luong (37 papers)
  2. Dinh Thai Hoang (125 papers)
  3. Shimin Gong (32 papers)
  4. Dusit Niyato (671 papers)
  5. Ping Wang (289 papers)
  6. Ying-Chang Liang (117 papers)
  7. Dong In Kim (168 papers)
Citations (1,346)

Summary

The paper "Applications of Deep Reinforcement Learning in Communications and Networking: A Survey" comprehensively reviews how Deep Reinforcement Learning (DRL) can address emerging issues in the communications and networking domain. The authors, Luong et al., present DRL as a transformative technology that combines Reinforcement Learning (RL) with Deep Learning (DL) to overcome the limitations of traditional RL methods in complex, large-scale networks.
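To ground the RL baseline the survey builds on, here is a minimal sketch of epsilon-greedy Q-learning on a toy dynamic spectrum access problem (a single-state, bandit-style special case; the channel count, busy probabilities, and hyperparameters are illustrative assumptions, not taken from the paper). DRL replaces this per-action value table with a deep neural network once the state and action spaces become too large to enumerate.

```python
import random

random.seed(0)

# Toy dynamic spectrum access: a secondary user picks one of N_CHANNELS
# each slot; each channel is busy with a fixed (unknown) probability.
# Reward 1 for a successful transmission, 0 on collision.
# All names and parameters here are illustrative, not from the paper.
N_CHANNELS = 3
BUSY_PROB = [0.9, 0.5, 0.1]        # channel 2 is the best choice

ALPHA, EPS = 0.1, 0.1              # learning rate, exploration rate
Q = [0.0] * N_CHANNELS             # one state, so Q is just per-action

def step(action):
    """Return reward 1.0 if the chosen channel is idle this slot."""
    return 0.0 if random.random() < BUSY_PROB[action] else 1.0

for _ in range(5000):
    # epsilon-greedy exploration over channels
    if random.random() < EPS:
        a = random.randrange(N_CHANNELS)
    else:
        a = max(range(N_CHANNELS), key=lambda i: Q[i])
    r = step(a)
    # Q-learning update (single state, so no bootstrap term here)
    Q[a] += ALPHA * (r - Q[a])

best = max(range(N_CHANNELS), key=lambda i: Q[i])
print(best)  # expect channel 2, the one with the lowest busy probability
```

With a table this small, convergence is quick; the survey's point is that real networks have state spaces (channel histories, queue lengths, user positions) far too large for such a table, which is where the DQN family of DRL methods comes in.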

The survey delineates critical applications of DRL, categorizing them into dynamic network access, data rate control, wireless caching, data offloading, network security, connectivity preservation, traffic routing, resource sharing, and data collection.

Core Areas of Application

  1. Dynamic Network Access and Data Rate Control:
    • Dynamic Spectrum Access: DRL is effectively used to solve spectrum access problems in IoT and other dynamic environments. Studies such as \cite{wang2017deep} demonstrate that using DRL for channel selection yields significant throughput and latency improvements over traditional methods. Adaptive DRL schemes can also retrain their DNNs to track ongoing network state changes, as noted in \cite{wang2018deeptrans}.
    • Joint User Association and Spectrum Access: DRL can optimize user association and spectrum allocation jointly, as shown in works such as \cite{nan2018deep}. These approaches leverage double deep Q-networks (DDQN) to manage the large state and action spaces typical of heterogeneous networks (HetNets).
    • Adaptive Rate Control: In adaptive streaming scenarios such as DASH, studies like \cite{mao2017neural} and \cite{gadaleta2017d} employ DRL to optimize video bitrate decisions, significantly enhancing quality of experience (QoE) by dynamically adapting to network conditions.
  2. Wireless Caching and Data Offloading:
    • QoS-Aware Caching: DRL models, such as the one in \cite{zhong2017deep}, optimize caching by predicting content popularity and selecting which items to cache, yielding notable improvements in cache hit rate.
    • Joint Caching and Transmission Control: Techniques like \cite{he2017cache} integrate caching decisions with transmission strategies to manage interference and optimize network throughput.
    • Computation Offloading: DRL enables efficient allocation of computation tasks across mobile edge computing (MEC) servers, minimizing latency and energy consumption. Studies like \cite{chen2018performance} and \cite{wan2017reinforcement} highlight DRL's potential to improve offloading decisions in dynamic networks with limited computational capacity.
  3. Network Security and Connectivity Preservation:
    • Jamming and Cyber-Physical Attacks: DRL provides a robust defense against jamming by dynamically adapting transmission strategies. Examples include \cite{xiao2018user}, which uses DRL to adjust power allocation in UAV networks.
    • Autonomous Connectivity Preservation: DRL ensures connectivity in multi-UAV and autonomous vehicle networks by controlling UAV/vehicle actions based on the dynamically observed state. Research like \cite{huang2017deeppreserve} demonstrates effective DRL policies maintaining network connectivity.
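Several of the works above, such as the HetNet association scheme in \cite{nan2018deep}, rely on double deep Q-networks. The toy numbers below illustrate the single difference between the DQN and DDQN bootstrap targets, with hand-picked values standing in for two networks' Q-outputs at the next state (all values are illustrative assumptions, not from the paper):

```python
# Contrast between the DQN and Double DQN (DDQN) learning targets.
GAMMA = 0.99
reward = 1.0
q_online = [0.2, 0.8, 0.5]   # online network's Q(s', a)
q_target = [0.3, 0.4, 0.9]   # target network's Q(s', a)

# Standard DQN: the target network both selects and evaluates the
# next action, which tends to overestimate values (max over noise).
dqn_target = reward + GAMMA * max(q_target)

# Double DQN: the online network selects the action, the target
# network evaluates it, reducing that overestimation bias.
a_star = max(range(len(q_online)), key=lambda a: q_online[a])
ddqn_target = reward + GAMMA * q_target[a_star]

print(round(dqn_target, 3), round(ddqn_target, 3))  # 1.891 1.396
```

Because the online network's preferred action (here, action 1) need not be the target network's highest-valued one, the DDQN target is typically smaller, mitigating the systematic overestimation that plain DQN accumulates in large action spaces.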

Implications and Future Research Directions

The survey underscores the broad potential and transformational impact of DRL across various aspects of modern and future communication networks. However, challenges such as state determination in dense networks, obtaining accurate channel information under adversarial conditions, and the implementation of decentralized learning frameworks in highly dynamic environments require further exploration.

Future research directions highlighted by the authors include:

  • Exploring DRL for channel estimation in massive MIMO systems and wireless power transfer (WPT)-enabled IoT services.
  • Using DRL for optimization in mobile crowdsensing (MCS), taking into account the dynamic nature of users' participation and incentives.
  • Cryptocurrency management and its associated learning strategies to maintain stable network economics.
  • Extending DRL frameworks to more effectively include auction mechanisms for resource allocation in highly heterogeneous environments.

In summary, DRL stands out as a pivotal methodology for modern network optimization, offering the capability to learn and adapt in dynamic, complex environments and paving the way for more intelligent, autonomous, and efficient communication networks.