Collective Certified Robustness against Graph Injection Attacks (2403.01423v1)
Abstract: We investigate certified robustness for GNNs under graph injection attacks. Existing research only provides sample-wise certificates that verify each node independently, yielding very limited certification performance. In this paper, we present the first collective certificate, which certifies a set of target nodes simultaneously. To achieve this, we formulate the problem as a binary integer quadratic constrained linear programming (BQCLP). We further develop a customized linearization technique that allows us to relax the BQCLP into a linear program (LP) that can be solved efficiently. Through comprehensive experiments, we demonstrate that our collective certification scheme significantly improves certification performance with minimal computational overhead. For instance, by solving the LP within 1 minute on the Citeseer dataset, we increase the certified ratio from 0.0% to 81.2% when the number of injected nodes is 5% of the graph size. Our work marks a crucial step toward making provable defenses more practical.
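The core computational idea above, relaxing a hard binary integer program into an efficiently solvable LP, can be illustrated with a minimal sketch. This is not the paper's BQCLP formulation; it is a toy binary program with assumed, illustrative coefficients, showing why the relaxed optimum gives a valid (conservative) bound of the kind a certificate relies on:

```python
# Toy LP relaxation sketch (illustrative numbers, not the paper's BQCLP):
# minimize c^T x subject to A_ub x <= b_ub with x binary, then relax
# x in {0, 1} to the box 0 <= x <= 1 and solve the resulting LP.
from scipy.optimize import linprog

c = [-1.0, -2.0]      # objective: equivalent to maximizing x1 + 2*x2
A_ub = [[1.0, 1.0]]   # one linear constraint: x1 + x2 <= 1
b_ub = [1.0]

# LP relaxation: the binary constraint becomes box bounds [0, 1].
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)])

# For a minimization problem, the relaxed optimum lower-bounds the
# binary optimum, so a certificate derived from it remains sound.
print(res.x, res.fun)
```

Here the relaxation happens to recover an integral solution; in general the LP optimum only bounds the integer optimum, which is exactly what makes such relaxations usable as conservative robustness certificates.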