GNN-Based Beamforming for Sum-Rate Maximization in MU-MISO Networks (2311.03659v1)

Published 7 Nov 2023 in eess.SY and cs.SY

Abstract: The advantages of graph neural networks (GNNs) in leveraging the graph topology of wireless networks have drawn increasing attention. This paper studies a GNN-based learning approach for sum-rate maximization in multi-user multiple-input single-output (MU-MISO) networks, subject to the users' individual data rate requirements and the power budget of the base station. By modeling the MU-MISO network as a graph, a GNN-based architecture named CRGAT is proposed to directly map the channel state information to the beamforming vectors. Attention-enabled aggregation and residual-assisted combination are adopted to enhance the learning capability and avoid the oversmoothing issue. Furthermore, a novel activation function is proposed to handle the constraint imposed by the limited power budget at the base station. The CRGAT is trained in an unsupervised manner with two proposed loss functions. An evaluation method is proposed for the learning-based approach, based on which the effectiveness of the proposed CRGAT is validated in comparison with several convex optimization and learning-based approaches. Numerical results reveal the advantages of the CRGAT, including millisecond-level response with limited loss of optimality, scalability to different numbers of users and power budgets, and adaptability to different system settings.
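
The abstract mentions two ingredients that can be illustrated concretely: an activation that keeps the predicted beamformers within the base-station power budget, and an unsupervised loss built from the achievable sum rate. The following PyTorch sketch is not the paper's CRGAT implementation; the function names, tensor shapes, and unit-noise assumption are illustrative assumptions only.

```python
# Hypothetical sketch (not the authors' CRGAT): a power-budget activation that
# projects raw beamformers onto the feasible set, and an unsupervised
# negative-sum-rate loss for a MU-MISO downlink.
import torch

def power_budget_activation(w_raw: torch.Tensor, p_max: float) -> torch.Tensor:
    """Scale beamformers so the total transmit power never exceeds p_max.

    w_raw: (K, N) complex beamforming vectors for K users and N BS antennas.
    """
    total_power = (w_raw.abs() ** 2).sum()
    # Shrink only when the budget is violated; feasible outputs pass through unchanged.
    scale = torch.clamp(torch.sqrt(p_max / (total_power + 1e-12)), max=1.0)
    return w_raw * scale

def sum_rate_loss(h: torch.Tensor, w: torch.Tensor, noise_power: float = 1.0) -> torch.Tensor:
    """Negative sum rate (to be minimized) for a MU-MISO downlink.

    h: (K, N) complex channel vectors; w: (K, N) complex beamformers.
    """
    # Effective link gains: gains[k, j] = |h_k^H w_j|^2
    gains = (h.conj() @ w.T).abs() ** 2          # (K, K), real-valued
    signal = gains.diagonal()                    # desired-link power per user
    interference = gains.sum(dim=1) - signal     # multi-user interference per user
    sinr = signal / (interference + noise_power)
    rates = torch.log2(1.0 + sinr)
    return -rates.sum()

# Toy usage: random channels and raw network outputs for K = 4 users, N = 8 antennas.
K, N, P_MAX = 4, 8, 10.0
h = torch.randn(K, N, dtype=torch.cfloat)
w_raw = torch.randn(K, N, dtype=torch.cfloat, requires_grad=True)
w = power_budget_activation(w_raw, P_MAX)
loss = sum_rate_loss(h, w)
loss.backward()  # gradients flow through the projection, enabling unsupervised training
```

In this sketch the projection is differentiable, so the negative sum rate can be backpropagated directly to the network outputs without labeled beamformers, which is the general idea behind unsupervised training of such models; the paper's actual activation, loss functions, and handling of the per-user rate constraints may differ.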

Authors (6)
  1. Yuhang Li (102 papers)
  2. Yang Lu (158 papers)
  3. Bo Ai (230 papers)
  4. Octavia A. Dobre (187 papers)
  5. Zhiguo Ding (260 papers)
  6. Dusit Niyato (671 papers)
Citations (14)
