Explainability-Based Adversarial Attack on Graphs Through Edge Perturbation (2312.17301v1)

Published 28 Dec 2023 in cs.CR and cs.LG

Abstract: Despite the success of graph neural networks (GNNs) in various domains, they exhibit susceptibility to adversarial attacks. Understanding these vulnerabilities is crucial for developing robust and secure applications. In this paper, we investigate the impact of test-time adversarial attacks through edge perturbations, which involve both edge insertions and deletions. A novel explainability-based method is proposed to identify important nodes in the graph and perform edge perturbation between these nodes. The proposed method is tested for node classification with three different architectures and datasets. The results suggest that introducing edges between nodes of different classes has a higher impact than removing edges among nodes within the same class.
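
The attack strategy described in the abstract can be sketched in a few lines of plain Python. This is a hedged illustration only: the function name, the edge/label/importance data structures, and the greedy budget loop are assumptions for clarity, not the paper's actual implementation (which relies on a trained GNN and an explainability method to produce the importance scores).

```python
# Minimal sketch of explainability-guided edge perturbation.
# Assumptions (not from the paper): importance scores are given as a
# dict, edges are undirected and stored as frozensets, and the attack
# greedily spends a fixed modification budget over the most important
# node pairs.

def perturb_edges(edges, labels, importance, budget, mode="insert"):
    """Return a perturbed copy of the edge set.

    edges      -- set of undirected edges, each a frozenset {u, v}
    labels     -- dict: node -> class label
    importance -- dict: node -> explainability score (higher = more important)
    budget     -- maximum number of edge modifications
    mode       -- "insert": add edges between important nodes of
                  *different* classes (the stronger attack per the paper);
                  "delete": remove edges among important nodes of the
                  *same* class.
    """
    # Rank nodes by their explainability-derived importance.
    ranked = sorted(importance, key=importance.get, reverse=True)
    perturbed = set(edges)
    changes = 0
    for i, u in enumerate(ranked):
        for v in ranked[i + 1:]:
            if changes >= budget:
                return perturbed
            e = frozenset((u, v))
            if mode == "insert" and labels[u] != labels[v] and e not in perturbed:
                perturbed.add(e)      # inter-class edge insertion
                changes += 1
            elif mode == "delete" and labels[u] == labels[v] and e in perturbed:
                perturbed.remove(e)   # intra-class edge deletion
                changes += 1
    return perturbed
```

For example, on a toy graph with two classes, `mode="insert"` adds edges between the highest-importance nodes of opposing classes, which is the perturbation the paper reports as most damaging.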
