Deep Reinforcement Learning Based Mode Selection and Resource Allocation for Cellular V2X Communications (2002.05485v1)

Published 13 Feb 2020 in cs.NI and eess.SP

Abstract: Cellular vehicle-to-everything (V2X) communication is crucial to supporting diverse future vehicular applications. However, for safety-critical applications, unstable vehicle-to-vehicle (V2V) links and the high signalling overhead of centralized resource allocation approaches become bottlenecks. In this paper, we investigate a joint optimization problem of transmission mode selection and resource allocation for cellular V2X communications. In particular, the problem is formulated as a Markov decision process, and a deep reinforcement learning (DRL) based decentralized algorithm is proposed to maximize the sum capacity of vehicle-to-infrastructure users while meeting the latency and reliability requirements of V2V pairs. Moreover, considering the training limitations of local DRL models, a two-timescale federated DRL algorithm is developed to help obtain a robust model, wherein a graph-theory-based vehicle clustering algorithm is executed on a large timescale and the federated learning algorithm is conducted on a small timescale. Simulation results show that the proposed DRL-based algorithm outperforms other decentralized baselines, and validate the superiority of the two-timescale federated DRL algorithm for newly activated V2V pairs.
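
The two-timescale structure described in the abstract can be pictured with a small sketch. The following is a minimal illustration under stated assumptions, not the paper's implementation: each V2V pair holds a local Q-network (a small PyTorch MLP), the hypothetical `cluster_vehicles` helper stands in for the paper's graph-theory-based clustering on the large timescale, and FedAvg-style parameter averaging runs periodically on the small timescale. All layer sizes, step counts, and helper names are assumptions introduced here for illustration.

```python
from typing import Dict, List

import torch
import torch.nn as nn


def make_local_q_net(state_dim: int = 16, num_actions: int = 60) -> nn.Module:
    """Local Q-network for one V2V pair (layer sizes are assumptions)."""
    return nn.Sequential(
        nn.Linear(state_dim, 128), nn.ReLU(),
        nn.Linear(128, num_actions),
    )


def federated_average(models: List[nn.Module]) -> None:
    """FedAvg-style parameter averaging across the agents of one cluster."""
    avg_state = {
        key: torch.stack([m.state_dict()[key] for m in models]).mean(dim=0)
        for key in models[0].state_dict()
    }
    for m in models:
        m.load_state_dict(avg_state)


def cluster_vehicles(agent_ids: List[int], size: int = 4) -> Dict[int, List[int]]:
    """Placeholder for the graph-theory-based clustering run on the large
    timescale; here agents are simply chunked into fixed-size groups."""
    groups = [agent_ids[i:i + size] for i in range(0, len(agent_ids), size)]
    return {c: members for c, members in enumerate(groups)}


# Two-timescale loop (all step counts are illustrative).
agents = {i: make_local_q_net() for i in range(8)}   # one local model per V2V pair

for large_step in range(5):                          # large timescale: re-cluster vehicles
    clusters = cluster_vehicles(list(agents))
    for small_step in range(20):                     # small timescale: local DRL updates
        # ... each agent would observe its local V2X state, pick a joint
        # (mode, resource block, power) action, and run a DQN update here ...
        if small_step % 10 == 9:                     # periodic federated aggregation
            for members in clusters.values():
                federated_average([agents[i] for i in members])
```

In this sketch the discrete action of each agent jointly encodes transmission mode, resource block, and power level, which is one common way to cast joint mode selection and resource allocation as a single DQN action space; the environment interaction and local DQN update are omitted.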

Authors (4)
  1. Xinran Zhang (28 papers)
  2. Mugen Peng (82 papers)
  3. Shi Yan (32 papers)
  4. Yaohua Sun (17 papers)
Citations (159)
