Structure-Aware Robustness Certificates for Graph Classification (2306.11915v2)
Abstract: Certifying the robustness of a graph-based machine learning model poses a critical challenge for safety. Current robustness certificates for graph classifiers guarantee output invariance with respect to the total number of node-pair flips (edge additions or deletions), which amounts to an $l_{0}$ ball centred on the adjacency matrix. Although theoretically attractive, this type of isotropic structural noise can be too restrictive in practical scenarios where some node pairs are more critical than others in determining the classifier's output. In such cases, the certificate gives a pessimistic depiction of the robustness of the graph model. To tackle this issue, we develop a randomised smoothing method based on adding an anisotropic noise distribution to the input graph structure. We show that our process generates structure-aware certificates for our classifiers, whereby the magnitude of the robustness certificate can vary across different pre-defined structures of the graph. We demonstrate the benefits of these certificates in both synthetic and real-world experiments.
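The core idea can be illustrated with a minimal sketch of randomised smoothing under anisotropic edge-flip noise: each node pair is flipped with its own probability, so different pre-defined structures of the graph receive different noise levels, and the smoothed prediction is a majority vote over noisy samples. The `classify` argument and the `p_flip` matrix below are illustrative assumptions, not the paper's actual interface, and a real certificate would additionally require confidence bounds on the vote counts.

```python
import numpy as np

def smooth_classify(adj, classify, p_flip, n_samples=200, seed=None):
    """Majority-vote smoothed graph classifier under anisotropic noise.

    adj:       (n, n) symmetric 0/1 adjacency matrix
    classify:  base graph classifier mapping an adjacency matrix to a
               class label (hypothetical placeholder)
    p_flip:    (n, n) per-node-pair flip probabilities; entries may differ
               across pre-defined structures, making the noise anisotropic
    """
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_samples):
        # Sample which node pairs to flip, with structure-dependent
        # probabilities, and symmetrise over the upper triangle.
        mask = rng.random(adj.shape) < p_flip
        mask = np.triu(mask, k=1)
        mask = mask | mask.T
        noisy = np.where(mask, 1 - adj, adj)  # flip the selected entries
        label = classify(noisy)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

Setting all entries of `p_flip` to a single constant recovers the isotropic setting of prior sparsity-aware smoothing; the anisotropic certificate arises precisely because critical node pairs can be assigned lower flip probabilities than unimportant ones.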