Fault Detection in Telecom Networks using Bi-level Federated Graph Neural Networks (2311.14469v1)

Published 24 Nov 2023 in cs.LG and cs.NI

Abstract: 5G and Beyond networks are becoming increasingly complex and heterogeneous, with diversified and demanding requirements from a wide variety of emerging applications. The complexity and diversity of Telecom networks place an increasing strain on maintenance and operation efforts. Moreover, strict security and privacy requirements make it challenging for mobile operators to leverage network data. To detect network faults and mitigate future failures, prior work focused on leveraging traditional ML/DL methods to locate anomalies in networks. The current approaches, although powerful, do not consider the intertwined nature of embedded and software-intensive Radio Access Network systems. In this paper, we propose a Bi-level Federated Graph Neural Network anomaly detection and diagnosis model that is able to detect anomalies in Telecom networks in a privacy-preserving manner while minimizing communication costs. Our method revolves around conceptualizing Telecom data as a bi-level temporal Graph Neural Network. The first graph captures the interactions between different RAN nodes that are exposed to different deployment scenarios in the network, while each individual Radio Access Network node is further elaborated into its software (SW) execution graph. Additionally, we use Federated Learning to address privacy and security limitations. Furthermore, we study the performance of the anomaly detection model under three settings: (1) Centralized, (2) Federated Learning, and (3) Personalized Federated Learning, using real-world data from an operational network. Our comprehensive experiments showed that the Personalized Federated Temporal Graph Neural Network method outperforms the most commonly used techniques for Anomaly Detection.
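
The sketch below is an illustrative reconstruction, not the authors' released code: a small temporal GCN (GCN spatial encoder, GRU temporal encoder, per-node anomaly score) over a RAN-level graph, trained locally by each simulated client and aggregated with plain FedAvg. All names (`TemporalGCNDetector`, `fedavg`, `local_round`) and the synthetic node features, edges, and labels are assumptions made for this sketch; the paper's second level (per-node SW execution graphs) and its personalized FL variant are omitted for brevity.

```python
# Minimal sketch of a federated temporal-GNN anomaly detector (assumptions noted above).
import copy
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv  # requires PyTorch Geometric


class TemporalGCNDetector(nn.Module):
    """GCN spatial encoder + GRU temporal encoder + per-node anomaly score."""

    def __init__(self, in_dim, hid_dim=32):
        super().__init__()
        self.gcn = GCNConv(in_dim, hid_dim)
        self.gru = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.head = nn.Linear(hid_dim, 1)

    def forward(self, x_seq, edge_index):
        # x_seq: [T, num_nodes, in_dim] sequence of node-feature snapshots
        spatial = torch.stack(
            [torch.relu(self.gcn(x_t, edge_index)) for x_t in x_seq]
        )                                               # [T, N, hid]
        _, h_last = self.gru(spatial.transpose(0, 1))   # per-node hidden state
        return torch.sigmoid(self.head(h_last.squeeze(0))).squeeze(-1)  # [N] scores


def fedavg(state_dicts):
    """Uniform FedAvg over client model weights."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(0)
    return avg


def local_round(model, x_seq, edge_index, labels, epochs=1, lr=1e-3):
    """One local training round on a client's private data (binary anomaly labels)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x_seq, edge_index), labels)
        loss.backward()
        opt.step()
    return model.state_dict()


if __name__ == "__main__":
    T, N, F = 8, 5, 4                       # snapshots, RAN nodes, features per node
    edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])  # toy RAN topology
    global_model = TemporalGCNDetector(F)
    for _ in range(3):                      # communication rounds
        client_states = []
        for _ in range(2):                  # two simulated clients/operator domains
            local = TemporalGCNDetector(F)
            local.load_state_dict(global_model.state_dict())
            x_seq = torch.randn(T, N, F)                    # synthetic private KPIs
            labels = torch.randint(0, 2, (N,)).float()      # synthetic anomaly labels
            client_states.append(local_round(local, x_seq, edge_index, labels))
        global_model.load_state_dict(fedavg(client_states))
    print(global_model(torch.randn(T, N, F), edge_index))   # per-node anomaly scores
```

The uniform averaging above corresponds to the plain Federated Learning setting; in a personalized variant, each client would presumably keep part of the model (for example, the prediction head) local instead of averaging it, which is one common way to handle non-IID data across deployment scenarios.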
