ADEdgeDrop: Adversarial Edge Dropping for Robust Graph Neural Networks (2403.09171v2)

Published 14 Mar 2024 in cs.LG and cs.AI

Abstract: Although Graph Neural Networks (GNNs) have shown a powerful ability to aggregate graph-structured information from neighboring nodes via various message-passing mechanisms, their performance is limited by poor generalization and fragile robustness caused by noisy and redundant graph data. As a prominent remedy, Graph Augmentation Learning (GAL) has recently received increasing attention. Among prior GAL approaches, edge-dropping methods, which randomly remove edges from a graph during training, are effective techniques for improving the robustness of GNNs. However, random edge dropping often discards critical edges, weakening the effectiveness of message passing. In this paper, we propose a novel adversarial edge-dropping method (ADEdgeDrop) in which an adversarial edge predictor guides the removal of edges; it can be flexibly incorporated into diverse GNN backbones. Within an adversarial training framework, the edge predictor operates on the line graph transformed from the original graph to estimate which edges should be dropped, which improves the interpretability of the edge-dropping procedure. ADEdgeDrop is optimized alternately by stochastic gradient descent and projected gradient descent. Comprehensive experiments on six graph benchmark datasets show that ADEdgeDrop outperforms state-of-the-art baselines across various GNN backbones, with improved generalization and robustness.
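For intuition, the sketch below shows one plausible way the pieces named in the abstract could fit together in PyTorch: an edge predictor that scores edges from line-graph features, a projected-gradient-descent (PGD) inner loop that adversarially perturbs those features, and a stochastic-gradient-descent (SGD) outer step that updates the backbone and the predictor. Everything here, including the module names (`EdgePredictor`, `TinyGCN`), the concatenated endpoint features, the soft keep-weights, and the hyperparameters, is an illustrative assumption rather than the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def line_graph_features(x, edge_index):
    # Each node of the line graph corresponds to an edge (u, v) of the
    # original graph. Concatenating the endpoint features is one simple
    # (assumed) choice of line-graph node feature.
    u, v = edge_index
    return torch.cat([x[u], x[v]], dim=-1)

class EdgePredictor(nn.Module):
    # Assigns each edge a keep-probability from its (possibly perturbed)
    # line-graph features.
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, hidden),
                                 nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, edge_index, delta):
        h = line_graph_features(x, edge_index) + delta  # adversarial shift
        return torch.sigmoid(self.mlp(h)).squeeze(-1)   # in (0, 1) per edge

class TinyGCN(nn.Module):
    # Minimal two-layer message-passing backbone; keep-probabilities act
    # as soft edge weights, a differentiable stand-in for hard drops.
    def __init__(self, in_dim, hidden, num_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden)
        self.lin2 = nn.Linear(hidden, num_classes)

    @staticmethod
    def propagate(h, edge_index, w):
        u, v = edge_index
        out = torch.zeros_like(h)
        out.index_add_(0, v, h[u] * w.unsqueeze(-1))  # weighted sum over in-edges
        return out

    def forward(self, x, edge_index, w):
        h = F.relu(self.propagate(self.lin1(x), edge_index, w))
        return self.propagate(self.lin2(h), edge_index, w)

def train_step(gnn, predictor, delta, x, edge_index, y, mask,
               opt, eps=0.1, alpha=0.02, pgd_steps=3):
    # Inner maximization (PGD): perturb the line-graph features to make
    # the loss as large as possible, then project back onto an
    # L-infinity ball of radius eps.
    for _ in range(pgd_steps):
        delta.requires_grad_(True)
        keep = predictor(x, edge_index, delta)
        loss = F.cross_entropy(gnn(x, edge_index, keep)[mask], y[mask])
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    # Outer minimization (SGD): update the GNN and the edge predictor
    # under the adversarial perturbation found above.
    opt.zero_grad()
    keep = predictor(x, edge_index, delta)
    loss = F.cross_entropy(gnn(x, edge_index, keep)[mask], y[mask])
    loss.backward()
    opt.step()
    return loss.item(), delta

# delta starts at zero, one row per edge of the line graph:
# delta = torch.zeros(edge_index.size(1), 2 * x.size(1))
```

At inference time one could threshold or sample the keep-probabilities to drop edges outright; the soft weighting above is only a differentiable surrogate used during training in this sketch.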

Authors (8)
  1. Zhaoliang Chen (11 papers)
  2. Zhihao Wu (34 papers)
  3. Ylli Sadikaj (4 papers)
  4. Claudia Plant (29 papers)
  5. Hong-Ning Dai (33 papers)
  6. Shiping Wang (17 papers)
  7. Wenzhong Guo (23 papers)
  8. Yiu-Ming Cheung (40 papers)
