
Density-Aware Reinforcement Learning to Optimise Energy Efficiency in UAV-Assisted Networks (2306.08785v1)

Published 14 Jun 2023 in cs.NI, cs.DC, cs.LG, and cs.MA

Abstract: Unmanned aerial vehicles (UAVs) serving as aerial base stations can be deployed to provide wireless connectivity to mobile users, such as vehicles. However, the density of vehicles on roads often varies spatially and temporally, primarily due to mobility and traffic conditions in a geographical area, making it difficult to provide ubiquitous service. Moreover, as energy-constrained UAVs hover in the sky while serving mobile users, they may face interference from nearby UAV cells or other access points sharing the same frequency band, impacting the system's energy efficiency (EE). Recent multi-agent reinforcement learning (MARL) approaches for optimising user coverage work well under reasonably even user densities but may underperform when the distribution is uneven, e.g., in urban road networks with uneven concentrations of vehicles. In this work, we propose a density-aware communication-enabled multi-agent decentralised double deep Q-network (DACEMAD-DDQN) approach that maximises the system's total EE by jointly optimising the trajectory of each UAV, the number of connected users, and the UAVs' energy consumption while tracking dense and uneven user distributions. Our approach outperforms state-of-the-art MARL approaches in terms of EE by as much as 65%-85%.
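The double deep Q-network (DDQN) component named in the abstract can be illustrated with a minimal sketch of the double-Q target computation: the online network selects the next action while the target network evaluates it, which reduces the value overestimation that plain DQN suffers from. The batch size, action count, reward values, and discount factor below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def double_dqn_targets(q_online, q_target, rewards, dones, gamma=0.99):
    """Double DQN bootstrap targets.

    q_online, q_target: (batch, n_actions) Q-value estimates for the
    next states from the online and target networks, respectively.
    The online net picks the action; the target net scores it.
    """
    next_actions = np.argmax(q_online, axis=1)                       # selection
    next_values = q_target[np.arange(len(q_target)), next_actions]   # evaluation
    return rewards + gamma * (1.0 - dones) * next_values

# Illustrative batch: 4 transitions, 3 actions each (hypothetical values,
# e.g. rewards could encode an energy-efficiency signal per UAV agent)
q_online = rng.normal(size=(4, 3))
q_target = rng.normal(size=(4, 3))
rewards = np.array([1.0, 0.5, 0.0, 2.0])
dones = np.array([0.0, 0.0, 1.0, 0.0])   # transition 2 is terminal

targets = double_dqn_targets(q_online, q_target, rewards, dones)
print(targets.shape)  # (4,)
```

In a decentralised multi-agent setting like the one the paper describes, each UAV agent would maintain its own online/target network pair and compute targets of this form from its local replay buffer; the terminal transition above gets only its immediate reward, since its bootstrap term is masked out.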

