Graph Adversarial Immunization for Certifiable Robustness (2302.08051v2)
Abstract: Despite their great success, graph neural networks (GNNs) are vulnerable to adversarial attacks. Existing defenses focus on adversarial training or model modification. In this paper, we propose and formulate graph adversarial immunization, i.e., vaccinating part of the graph structure to improve the certifiable robustness of the graph against any admissible adversarial attack. We first propose edge-level immunization to vaccinate node pairs. Unfortunately, such edge-level immunization cannot defend against emerging node injection attacks, since it only immunizes existing node pairs. To address this, we further propose node-level immunization. To avoid the computationally intensive combinatorial optimization associated with adversarial immunization, we develop the AdvImmune-Edge and AdvImmune-Node algorithms to effectively obtain the immune node pairs or nodes. Extensive experiments demonstrate the superiority of the AdvImmune methods. In particular, AdvImmune-Node remarkably improves the ratio of robust nodes by 79%, 294%, and 100% after immunizing only 5% of nodes. Furthermore, the AdvImmune methods show excellent defensive performance against various attacks, outperforming state-of-the-art defenses. To the best of our knowledge, this is the first attempt to improve certifiable robustness from the graph data perspective without losing performance on clean graphs, providing new insights into graph adversarial learning.
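To make the edge-level formulation concrete, below is a minimal, hypothetical sketch of greedy edge immunization under stated assumptions: the `certificate_margin` function is a toy placeholder standing in for the paper's actual per-node robustness certificate (which the abstract does not spell out), and `greedy_edge_immunization` only illustrates the general idea of selecting a small budget of node pairs whose protection most increases the number of certifiably robust nodes. It is not the authors' AdvImmune-Edge implementation.

```python
import itertools
import numpy as np


def certificate_margin(adj, immune_edges):
    """Hypothetical placeholder for a per-node robustness certificate.

    In the paper this role is played by the worst-case classification
    margin of each node under any admissible structural perturbation,
    computed with the immune (unmodifiable) node pairs held fixed.
    Here we return only a toy proxy so the greedy loop is runnable.
    """
    n = adj.shape[0]
    protected = np.zeros(n)
    for u, v in immune_edges:
        protected[u] += 1.0
        protected[v] += 1.0
    # Toy proxy: nodes with more immune incident edges get a larger margin,
    # while high-degree nodes are assumed harder to certify.
    return protected - 0.1 * adj.sum(axis=1)


def greedy_edge_immunization(adj, budget):
    """Greedily pick `budget` node pairs whose immunization most increases
    the count of certifiably robust nodes (proxy margin > 0)."""
    n = adj.shape[0]
    candidates = list(itertools.combinations(range(n), 2))
    immune = []
    for _ in range(budget):
        base = (certificate_margin(adj, immune) > 0).sum()
        gains = [
            (certificate_margin(adj, immune + [e]) > 0).sum() - base
            for e in candidates
        ]
        best = int(np.argmax(gains))
        immune.append(candidates.pop(best))
    return immune


if __name__ == "__main__":
    # Random symmetric toy graph with 8 nodes and no self-loops.
    rng = np.random.default_rng(0)
    adj = (rng.random((8, 8)) < 0.3).astype(float)
    adj = np.triu(adj, 1)
    adj = adj + adj.T
    print(greedy_edge_immunization(adj, budget=3))
```

The exhaustive candidate enumeration above is exactly the combinatorial cost the abstract says AdvImmune-Edge is designed to avoid; a faithful implementation would instead exploit structure in the certificate to rank candidates efficiently.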