Efficient, Direct, and Restricted Black-Box Graph Evasion Attacks to Any-Layer Graph Neural Networks via Influence Function (2009.00203v3)

Published 1 Sep 2020 in cs.CR and cs.LG

Abstract: Graph neural networks (GNNs), the mainstream method for learning on graph data, are vulnerable to graph evasion attacks, in which an attacker can fool a trained GNN model by slightly perturbing the graph structure. Existing attacks have at least one of the following drawbacks: 1) they are limited to directly attacking two-layer GNNs; 2) they are inefficient; and 3) they are impractical, as they need to know all or part of the GNN model parameters. We address these drawbacks and propose an influence-based \emph{efficient, direct, and restricted black-box} evasion attack on \emph{any-layer} GNNs. Specifically, we first introduce two influence functions, i.e., feature-label influence and label influence, defined on GNNs and label propagation (LP), respectively. We then observe that GNNs and LP are strongly connected in terms of these influences. Based on this connection, we reformulate the evasion attack on GNNs as computing label influence on LP, which is \emph{inherently} applicable to any-layer GNNs and requires no information about the internal GNN model. Finally, we propose an efficient algorithm to compute label influence. Experimental results on various graph datasets show that, compared with state-of-the-art white-box attacks, our attack achieves comparable attack performance with a 5-50x speedup when attacking two-layer GNNs. Moreover, our attack is effective against multi-layer GNNs\footnote{Source code and the full version are available at: \url{https://github.com/ventr1c/InfAttack}}.
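To make the abstract's idea concrete, below is a minimal, hypothetical sketch (not the authors' code) of the restricted black-box setting it describes: the attacker treats label propagation (LP) as a surrogate for the GNN and looks for the structural perturbation that most weakens the target node's correct-label score. The paper's contribution is an efficient closed-form label-influence computation; this sketch instead brute-forces candidate edge flips purely to illustrate the objective. All names (lp_scores, best_edge_flip, the toy graph, k) are illustrative assumptions.

```python
import numpy as np

def normalize(A):
    """Row-normalize the adjacency matrix with self-loops (LP transition matrix)."""
    A = A + np.eye(len(A))
    return A / A.sum(axis=1, keepdims=True)

def lp_scores(A, Y, k=2):
    """Propagate one-hot training labels Y for k steps (simplified LP,
    without clamping labeled nodes)."""
    P = normalize(A)
    S = Y.copy()
    for _ in range(k):
        S = P @ S
    return S

def best_edge_flip(A, Y, target, true_class, k=2):
    """Restricted black-box step: try every single edge flip incident to
    `target` and keep the one that most reduces the target's LP score on
    its true class. The paper replaces this brute force with an efficient
    label-influence calculation."""
    base = lp_scores(A, Y, k)[target, true_class]
    best, best_drop = None, 0.0
    for u in range(len(A)):
        if u == target:
            continue
        B = A.copy()
        B[target, u] = B[u, target] = 1 - B[target, u]  # flip edge (target, u)
        drop = base - lp_scores(B, Y, k)[target, true_class]
        if drop > best_drop:
            best, best_drop = (target, u), drop
    return best, best_drop

# Toy graph: 6 nodes, two classes; node 5 is the attack target.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
Y = np.zeros((6, 2))
Y[[0, 1], 0] = 1  # nodes 0, 1 labeled class 0
Y[[3, 4], 1] = 1  # nodes 3, 4 labeled class 1
print(best_edge_flip(A, Y, target=5, true_class=1))
```

Because LP here stands in for the GNN, the search never touches model parameters, which is what makes the attack "restricted black-box" and applicable to GNNs of any depth.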
