Multi-UAV Speed Control with Collision Avoidance and Handover-aware Cell Association: DRL with Action Branching (2307.13158v2)

Published 24 Jul 2023 in cs.LG, cs.RO, cs.SY, and eess.SY

Abstract: This paper presents a deep reinforcement learning solution for optimizing multi-UAV cell-association decisions and their moving velocity on a 3D aerial highway. The objective is to enhance transportation and communication performance, including collision avoidance, connectivity, and handovers. The problem is formulated as a Markov decision process (MDP) with UAVs' states defined by velocities and communication data rates. We propose a neural architecture with a shared decision module and multiple network branches, each dedicated to a specific action dimension in a 2D transportation-communication space. This design efficiently handles the multi-dimensional action space, allowing independence for individual action dimensions. We introduce two models, Branching Dueling Q-Network (BDQ) and Branching Dueling Double Deep Q-Network (Dueling DDQN), to demonstrate the approach. Simulation results show a significant improvement of 18.32% compared to existing benchmarks.
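The abstract's action-branching design (a shared decision module feeding one branch per action dimension, with dueling Q-value aggregation) follows the architecture of Tavakoli et al. [AAAI 2018]. Below is a minimal, hedged sketch of such a network in PyTorch, assuming two branches (UAV speed level and cell association); the layer widths, branch sizes, and names are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a branching dueling Q-network (after Tavakoli et al., 2018).
# Assumed setup: a 2-D action space with one branch for UAV speed levels
# and one for cell-association choices; sizes are placeholders.
import torch
import torch.nn as nn

class BranchingDuelingQNet(nn.Module):
    def __init__(self, state_dim, actions_per_branch=(5, 4), hidden=128):
        super().__init__()
        # Shared decision module: encodes the UAV state (e.g., velocity, data rate).
        self.shared = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Single state-value stream shared by all branches.
        self.value = nn.Linear(hidden, 1)
        # One advantage stream per action dimension.
        self.advantages = nn.ModuleList(
            [nn.Linear(hidden, n) for n in actions_per_branch]
        )

    def forward(self, state):
        h = self.shared(state)
        v = self.value(h)                      # (batch, 1)
        q_branches = []
        for adv_head in self.advantages:
            a = adv_head(h)                    # (batch, n_actions_d)
            # Dueling aggregation per branch: Q_d = V + (A_d - mean(A_d))
            q_branches.append(v + a - a.mean(dim=1, keepdim=True))
        return q_branches                      # per-branch Q-value tensors

# Greedy action selection: each branch picks its own index independently,
# which is what keeps the joint action space from growing combinatorially.
net = BranchingDuelingQNet(state_dim=6)
state = torch.randn(1, 6)
actions = [q.argmax(dim=1).item() for q in net(state)]
```

The Double DQN variant mentioned in the abstract would change only the training target (selecting the next action with the online network and evaluating it with the target network), not this forward architecture.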

Authors (4)
  1. Zijiang Yan (8 papers)
  2. Wael Jaafar (35 papers)
  3. Bassant Selim (9 papers)
  4. Hina Tabassum (74 papers)
Citations (3)
