Robust Subgraph Learning by Monitoring Early Training Representations (2403.09901v2)

Published 14 Mar 2024 in cs.LG and cs.CR

Abstract: Graph neural networks (GNNs) have attracted significant attention for their outstanding performance in graph learning and node classification tasks. However, their vulnerability to adversarial attacks, particularly through susceptible nodes, poses a challenge in decision-making. The need for robust graph summarization is evident in adversarial challenges resulting from the propagation of attacks throughout the entire graph. In this paper, we address both performance and adversarial robustness in graph input by introducing the novel technique SHERD (Subgraph Learning Hale through Early Training Representation Distances). SHERD leverages information from layers of a partially trained graph convolutional network (GCN) to detect susceptible nodes during adversarial attacks using standard distance metrics. The method identifies "vulnerable (bad)" nodes and removes such nodes to form a robust subgraph while maintaining node classification performance. Through our experiments, we demonstrate the increased performance of SHERD in enhancing robustness by comparing the network's performance on original and subgraph inputs against various baselines alongside existing adversarial attacks. Our experiments across multiple datasets, including citation datasets such as Cora, Citeseer, and Pubmed, as well as microanatomical tissue structures of cell graphs in the placenta, highlight that SHERD not only achieves substantial improvement in robust performance but also outperforms several baselines in terms of node classification accuracy and computational complexity.
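
To make the abstract's procedure concrete, here is a minimal sketch in plain PyTorch of the general idea: partially train a two-layer GCN, compare each node's early-layer representation on clean versus perturbed input using a distance metric, and flag the nodes whose representations move the most as candidates for removal. This is an illustrative approximation, not the authors' implementation: the dense adjacency matrix, random feature noise standing in for an adversarial attack, the Euclidean distance, and all hyperparameters (`drop_frac`, `partial_epochs`, the hidden width) are assumptions made for brevity.

```python
# Hedged sketch of a SHERD-style node-susceptibility score (not the paper's exact method).
import torch
import torch.nn.functional as F


def normalize_adj(A: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = A_hat.sum(dim=1).clamp(min=1e-12).pow(-0.5)
    return d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]


class GCN(torch.nn.Module):
    """Two-layer GCN; the hidden layer serves as the 'early' representation."""

    def __init__(self, d_in: int, d_hidden: int, n_classes: int):
        super().__init__()
        self.lin1 = torch.nn.Linear(d_in, d_hidden)
        self.lin2 = torch.nn.Linear(d_hidden, n_classes)

    def forward(self, A_norm, X, return_hidden: bool = False):
        H = F.relu(A_norm @ self.lin1(X))        # early-layer node representations
        logits = A_norm @ self.lin2(H)
        return (logits, H) if return_hidden else logits


def susceptible_nodes(A, X, y, drop_frac=0.1, partial_epochs=20):
    """Partially train a GCN, then score each node by how far its early
    representation moves under a perturbation; return the top-scoring indices."""
    A_norm = normalize_adj(A)
    model = GCN(X.size(1), 16, int(y.max()) + 1)
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(partial_epochs):              # early/partial training only
        opt.zero_grad()
        F.cross_entropy(model(A_norm, X), y).backward()
        opt.step()

    with torch.no_grad():
        _, H_clean = model(A_norm, X, return_hidden=True)
        X_pert = X + 0.1 * torch.randn_like(X)   # stand-in for an adversarial attack
        _, H_pert = model(A_norm, X_pert, return_hidden=True)
        scores = (H_clean - H_pert).norm(dim=1)  # per-node representation distance

    k = max(1, int(drop_frac * X.size(0)))
    return scores.topk(k).indices                # candidate "vulnerable" nodes to remove
```

In this sketch, the returned indices would be dropped from the graph (along with their incident edges) to form the robust subgraph on which the classifier is then trained and evaluated.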

