
Designing Heterogeneous GNNs with Desired Permutation Properties for Wireless Resource Allocation

Published 8 Mar 2022 in cs.LG, cs.SY, and eess.SY | arXiv:2203.03906v3

Abstract: Graph neural networks (GNNs) have been designed to learn a variety of wireless policies, i.e., the mappings from environment parameters to decision variables, thanks to their superior performance and their potential for scalability and size generalizability. These merits are rooted in leveraging the permutation prior, i.e., satisfying the permutation property of the policy to be learned (referred to as the desired permutation property). Many wireless policies exhibit complicated permutation properties, and heterogeneous GNNs (HetGNNs) should be used to satisfy them. Two critical factors enable a HetGNN to satisfy a desired permutation property: constructing an appropriate heterogeneous graph and judiciously designing the architecture of the HetGNN. However, both the graph and the HetGNN have so far been designed heuristically. In this paper, we strive to provide a systematic approach to the design that satisfies the desired permutation property. We first propose a method for constructing a graph for a policy, where the edges and their types are defined so as to satisfy complicated permutation properties. Then, we provide and prove three sufficient conditions for designing a HetGNN such that it satisfies the desired permutation property when learning over an appropriate graph. These conditions suggest a method of designing the HetGNN with the desired permutation property by sharing the processing, combining, and pooling functions according to the types of vertices and edges of the graph. We take power allocation and hybrid precoding policies as examples to demonstrate how to apply the proposed methods and to validate the impact of the permutation prior by simulations.
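To make the parameter-sharing idea concrete, here is a minimal NumPy sketch (all names and the two-type setup are illustrative assumptions, not the paper's actual architecture) of one layer of a two-type HetGNN: processing weights are shared within each vertex type and combining weights within each edge type, and the resulting layer is equivariant to independent permutations of the vertices of each type.

```python
import numpy as np

rng = np.random.default_rng(0)

def hetgnn_layer(H_a, H_b, params):
    """One update of a hypothetical two-type heterogeneous GNN layer.

    Vertices of type a (e.g. users) and type b (e.g. antennas) each share
    their own processing weight (Wa, Wb), and each edge type shares its own
    combining weight (Vab, Vba); sum pooling over neighbors completes the
    type-wise sharing that yields the desired permutation equivariance.
    """
    Wa, Wb, Vab, Vba = params
    pool_b = H_b.sum(axis=0, keepdims=True)  # pooled type-b features
    pool_a = H_a.sum(axis=0, keepdims=True)  # pooled type-a features
    H_a_new = np.maximum(0.0, H_a @ Wa + pool_b @ Vba)  # ReLU activation
    H_b_new = np.maximum(0.0, H_b @ Wb + pool_a @ Vab)
    return H_a_new, H_b_new

Ka, Kb, d = 4, 3, 5
H_a = rng.standard_normal((Ka, d))
H_b = rng.standard_normal((Kb, d))
params = [rng.standard_normal((d, d)) for _ in range(4)]

# Independently permuting the vertices within each type permutes the
# outputs the same way, so the learned policy is unchanged by relabeling.
pa, pb = rng.permutation(Ka), rng.permutation(Kb)
out_a, out_b = hetgnn_layer(H_a, H_b, params)
out_a_perm, out_b_perm = hetgnn_layer(H_a[pa], H_b[pb], params)
assert np.allclose(out_a[pa], out_a_perm)
assert np.allclose(out_b[pb], out_b_perm)
```

The equivariance check passes because sum pooling is invariant to the order of the pooled vertices, while the shared per-type weights act row-wise; breaking the sharing (e.g. a distinct weight per user) would break the assertions.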

