Adversarial Training for Graph Neural Networks: Pitfalls, Solutions, and New Directions (2306.15427v2)
Abstract: Despite its success in the image domain, adversarial training has not (yet) stood out as an effective defense for Graph Neural Networks (GNNs) against graph structure perturbations. In the pursuit of fixing adversarial training (1) we show and overcome fundamental theoretical as well as practical limitations of the graph learning setting adopted in prior work; (2) we reveal that more flexible GNNs based on learnable graph diffusion are able to adjust to adversarial perturbations, while the learned message passing scheme is naturally interpretable; (3) we introduce the first attack for structure perturbations that, while targeting multiple nodes at once, is capable of handling global (graph-level) as well as local (node-level) constraints. Combining these contributions, we demonstrate that adversarial training is a state-of-the-art defense against adversarial structure perturbations.
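To make the setup concrete, below is a minimal sketch of PGD-style adversarial training against graph structure perturbations with a global edge-flip budget. Everything in it is an illustrative assumption rather than the paper's implementation: the dense two-layer GCN (`ToyGCN`), the hyperparameters, and the crude scaling-based budget projection are placeholders chosen to keep the example self-contained.

```python
# Hedged sketch: adversarial training of a GNN against structure perturbations.
# The inner loop relaxes discrete edge flips to continuous variables in [0, 1],
# ascends the training loss, and projects onto a global budget; the outer loop
# trains the model on the perturbed adjacency matrix. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGCN(nn.Module):
    """Two-layer GCN operating on a dense adjacency matrix (assumed model)."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
        a = adj + torch.eye(adj.size(0), device=adj.device)
        d_inv_sqrt = a.sum(1).pow(-0.5)
        a_norm = d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]
        h = F.relu(a_norm @ self.lin1(x))
        return a_norm @ self.lin2(h)

def perturb_structure(model, x, adj, y, train_mask, budget, steps=20, lr=0.1):
    """Inner maximization: continuous relaxation of edge flips under a global budget."""
    p = torch.zeros_like(adj, requires_grad=True)  # relaxed flip variables
    for _ in range(steps):
        p_sym = torch.triu(p, diagonal=1)
        p_sym = p_sym + p_sym.t()                  # keep the graph undirected
        adj_pert = adj + (1 - 2 * adj) * p_sym     # flip: 0 -> 1 and 1 -> 0
        loss = F.cross_entropy(model(x, adj_pert)[train_mask], y[train_mask])
        grad, = torch.autograd.grad(loss, p)
        with torch.no_grad():
            p += lr * grad.sign()
            p.clamp_(0, 1)
            if p.sum() > budget:                   # crude global-budget projection
                p *= budget / p.sum()
    with torch.no_grad():
        flips = (torch.triu(p, diagonal=1) > 0.5).float()
        flips = flips + flips.t()                  # discretize the relaxation
        return adj + (1 - 2 * adj) * flips

def adversarial_training(model, x, adj, y, train_mask, budget, epochs=100):
    """Outer minimization: fit the model on the adversarially perturbed graph."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-2, weight_decay=5e-4)
    for _ in range(epochs):
        adj_pert = perturb_structure(model, x, adj, y, train_mask, budget)
        opt.zero_grad()
        loss = F.cross_entropy(model(x, adj_pert)[train_mask], y[train_mask])
        loss.backward()
        opt.step()
    return model
```

The local (node-level) constraints mentioned in contribution (3) would require an additional per-node projection inside the inner loop; that part is omitted here to keep the sketch short.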
Authors: Lukas Gosch, Simon Geisler, Daniel Sturm, Bertrand Charpentier, Daniel Zügner, Stephan Günnemann