Minimum Topology Attacks for Graph Neural Networks (2403.02723v1)

Published 5 Mar 2024 in cs.AI

Abstract: With the great popularity of Graph Neural Networks (GNNs), their robustness to adversarial topology attacks has received significant attention. Although many attack methods have been proposed, they mainly focus on fixed-budget attacks, which seek the most adversarial perturbations within a fixed budget for each target node. However, because nodes vary in robustness, a fixed budget creates an inevitable dilemma: if the budget is too small, no successful perturbation is found, while if it is too large, the resulting redundant perturbations hurt invisibility. To break this dilemma, we propose a new type of topology attack, named minimum-budget topology attack, which adaptively finds the minimum perturbation sufficient for a successful attack on each node. To this end, we propose an attack model, named MiBTack, based on a dynamic projected gradient descent algorithm, which can effectively solve the underlying non-convex constrained optimization over discrete topology. Extensive results on three GNNs and four real-world datasets show that MiBTack successfully misclassifies all target nodes with the minimum number of perturbed edges. Moreover, the obtained minimum budget can be used to measure node robustness, allowing us to explore the relationships among robustness, topology, and uncertainty for nodes, which is beyond what current fixed-budget topology attacks can offer.
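
The abstract's core mechanism, a dynamic projected gradient descent over relaxed edge flips, can be illustrated with a short sketch. The code below is not the authors' implementation: the toy graph, the two-layer surrogate GCN with frozen random weights, the crude L1-scaling step standing in for MiBTack's projection operator, and all names (`gcn_logits`, `s`, `budget`) are assumptions made for illustration.

```python
# Minimal sketch of a dynamic-budget topology attack in the spirit of MiBTack.
# NOT the authors' code: surrogate model, graph, and projection are illustrative.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

n, d, c = 8, 5, 3                          # nodes, feature dim, classes (toy sizes)
A = (torch.rand(n, n) < 0.3).float()       # random undirected toy graph
A = torch.triu(A, diagonal=1)
A = A + A.T
X = torch.randn(n, d)                      # toy node features
W1 = torch.randn(d, 16)                    # frozen random surrogate weights
W2 = torch.randn(16, c)

def gcn_logits(adj: torch.Tensor) -> torch.Tensor:
    """Two-layer GCN-style surrogate: A_hat relu(A_hat X W1) W2 (illustrative)."""
    a = adj + torch.eye(n)                 # add self-loops
    deg = a.sum(1).clamp(min=1e-6).pow(-0.5)
    a_hat = deg[:, None] * a * deg[None, :]  # symmetric normalization
    return a_hat @ torch.relu(a_hat @ X @ W1) @ W2

target = 0
y = gcn_logits(A).argmax(1)[target]        # clean prediction we try to flip

# s relaxes edge flips to [0, 1]; the perturbed adjacency is A + (1 - 2A) * s,
# so s_ij = 1 deletes an existing edge or inserts a missing one.
s = torch.zeros(n, n, requires_grad=True)
budget, lr = 1.0, 0.1

for step in range(200):
    s_sym = (s + s.T) / 2                  # keep the perturbation symmetric
    logits = gcn_logits(A + (1 - 2 * A) * s_sym)
    # Gradient descent on the *negative* cross-entropy of the clean label,
    # i.e. push the target node away from its current prediction.
    loss = -F.cross_entropy(logits[None, target], y[None])
    loss.backward()
    with torch.no_grad():
        s -= lr * s.grad
        s.clamp_(0, 1)
        if s.sum() > budget:               # crude L1 projection (a stand-in
            s *= budget / s.sum()          # for the paper's projection step)
        # Discretize and test: grow the budget only while the attack still
        # fails, so the first success uses an (approximately) minimal budget.
        flips = ((s + s.T) / 2 > 0.5).float()
        if gcn_logits(A + (1 - 2 * A) * flips).argmax(1)[target] != y:
            print(f"success at step {step}, budget {budget:.1f}, "
                  f"{int(flips.sum().item()) // 2} edges flipped")
            break
        budget += 0.5
    s.grad.zero_()
```

The design point is the control flow rather than the projection details: the budget grows only while the discretized attack still fails, so the first success lands at an approximately minimal number of flipped edges, which is exactly the quantity the paper proposes as a per-node robustness measure.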
