
PAC-Bayesian Adversarially Robust Generalization Bounds for Graph Neural Network (2402.04038v2)

Published 6 Feb 2024 in stat.ML and cs.LG

Abstract: Graph neural networks (GNNs) have gained popularity for various graph-related tasks. However, like deep neural networks, GNNs are vulnerable to adversarial attacks. Empirical studies have shown that adversarially robust generalization plays a pivotal role in establishing effective defense algorithms against adversarial attacks. In this paper, we provide adversarially robust generalization bounds for two popular kinds of GNNs, the graph convolutional network (GCN) and the message passing graph neural network, using the PAC-Bayesian framework. Our results reveal that the spectral norm of the graph diffusion matrix, the spectral norms of the weights, and the perturbation factor govern the robust generalization bounds of both models. Our bounds are nontrivial generalizations of the results developed in (Liao et al., 2020) from the standard setting to the adversarial setting, while avoiding exponential dependence on the maximum node degree. As corollaries, we derive better PAC-Bayesian robust generalization bounds for GCN in the standard setting, which improve the bounds in (Liao et al., 2020) by avoiding exponential dependence on the maximum node degree.
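The abstract's key quantity — the spectral norm of the graph diffusion matrix — can be made concrete with a short sketch. For a GCN in the style of Kipf & Welling (reference 22), the diffusion matrix is typically the symmetrically normalized adjacency with self-loops, P = D̃^{-1/2}(A + I)D̃^{-1/2}; the function name and the choice of this particular diffusion matrix are illustrative assumptions, not taken from the paper itself:

```python
import numpy as np

def gcn_diffusion_spectral_norm(adj: np.ndarray) -> float:
    """Spectral norm (largest singular value) of the GCN diffusion matrix
    P = D^{-1/2} (A + I) D^{-1/2}. Illustrative sketch, not the paper's code."""
    a_tilde = adj + np.eye(adj.shape[0])           # adjacency with self-loops
    deg = a_tilde.sum(axis=1)                      # degrees of A + I
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))       # D^{-1/2}
    diffusion = d_inv_sqrt @ a_tilde @ d_inv_sqrt  # symmetric normalization
    return float(np.linalg.norm(diffusion, ord=2))

# Example: a 4-node path graph.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(gcn_diffusion_spectral_norm(adj))  # 1.0
```

Because this normalized diffusion matrix is similar to a row-stochastic matrix, its spectral norm is exactly 1 for any graph, which is one reason bounds stated in terms of it can avoid the exponential dependence on the maximum node degree that the abstract highlights.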

References (47)
  1. Neural network learning: Theoretical foundations. Cambridge University Press, 2009.
  2. Adversarial learning guarantees for linear hypotheses and neural networks. In International Conference on Machine Learning, pp. 431–441. PMLR, 2020.
  3. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov):463–482, 2002.
  4. Spectrally-normalized margin bounds for neural networks. In Advances in Neural Information Processing Systems, volume 30, 2017.
  5. Adversarial attacks on node embeddings via graph poisoning. In Proceedings of the 36th International Conference on Machine Learning, pp.  695–704. PMLR, 2019.
  6. Discriminative embeddings of latent variable models for structured data. In Proceedings of The 33rd International Conference on Machine Learning, pp.  2702–2711. PMLR, 2016.
  7. Adversarial attack on graph structured data. In Proceedings of the 35th International Conference on Machine Learning, pp.  1115–1124. PMLR, 2018.
  8. Graph neural tangent kernel: Fusing graph neural networks with graph kernels. In Advances in Neural Information Processing Systems, volume 32, 2019.
  9. All you need is low (rank): Defending against adversarial attacks on graphs. In Proceedings of the 13th International Conference on Web Search and Data Mining, pp. 169–177, 2020.
  10. Learning theory can (sometimes) explain generalisation in graph neural networks. In Advances in Neural Information Processing Systems, volume 34, pp.  27043–27056. Curran Associates, Inc., 2021.
  11. Generalizable adversarial training via spectral normalization. In Proceedings of the 7th International Conference on Learning Representations, 2019.
  12. Theoretical investigation of generalization bounds for adversarial learning of deep neural networks. Journal of Statistical Theory and Practice, 15(2):51, 2021.
  13. Generalization and representational limits of graph neural networks. In Proceedings of the 37th International Conference on Machine Learning, pp.  3419–3430. PMLR, 2020.
  14. Neural message passing for quantum chemistry. In International Conference on Machine Learning, pp. 1263–1272. PMLR, 2017.
  15. Size-independent sample complexity of neural networks. In Proceedings of the 31st Conference On Learning Theory, pp. 297–299. PMLR, 2018.
  16. Explaining and harnessing adversarial examples. In Proceedings of the 3rd International Conference on Learning Representations, 2015.
  17. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems, volume 31, 2018.
  18. Junction tree variational autoencoder for molecular graph generation. In International Conference on Machine Learning, pp. 2323–2332. PMLR, 2018.
  19. Learning multimodal graph-to-graph translation for molecular optimization. In International Conference on Learning Representations, 2019.
  20. Generalization in graph neural networks: Improved PAC-Bayesian bounds on graph diffusion. In Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, pp. 6314–6341. PMLR, 2023.
  21. Adversarial risk bounds via function transformation. arXiv preprint arXiv:1810.09519, 2018.
  22. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017.
  23. A PAC-Bayesian approach to generalization bounds for graph neural networks. In International Conference on Learning Representations, 2020.
  24. A unified framework for data poisoning attack to graph-based semi-supervised learning. In Advances in Neural Information Processing Systems, volume 32, 2019.
  25. McAllester, D. Simplified PAC-Bayesian margin bounds. In Learning Theory and Kernel Machines: 16th Annual Conference on Learning Theory and 7th Kernel Workshop, COLT/Kernel 2003, Washington, DC, USA, August 24–27, 2003. Proceedings, pp. 203–215. Springer, 2003.
  26. On the generalization analysis of adversarial learning. In International Conference on Machine Learning, pp. 16174–16196. PMLR, 2022.
  27. Norm-based capacity control in neural networks. In Conference on Learning Theory, pp.  1376–1401. PMLR, 2015.
  28. A PAC-Bayesian approach to spectrally-normalized margin bounds for neural networks. arXiv preprint arXiv:1707.09564, 2017.
  29. Joint structure feature exploration and regularization for multi-task graph classification. IEEE Transactions on Knowledge and Data Engineering, 28(3):715–728, 2015.
  30. Task sensitive feature exploration and learning for multitask graph classification. IEEE Transactions on Cybernetics, 47(3):744–758, 2016.
  31. Certified defenses against adversarial examples. arXiv preprint arXiv:1801.09344, 2018.
  32. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009.
  33. The Vapnik–Chervonenkis dimension of graph and recursive neural networks. Neural Networks, 108:248–259, 2018.
  34. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
  35. Transferring robustness for graph neural network against poisoning attacks. In Proceedings of the 13th International Conference on Web Search and Data Mining, pp. 600–608, 2020.
  36. Tropp, J. A. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12:389–434, 2012.
  37. Stability and generalization of graph convolutional neural networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp.  1539–1548, 2019.
  38. Adversarial examples for graph data: deep insights into attack and defense. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pp.  4816–4823, 2019.
  39. Adversarial rademacher complexity of deep neural networks. arXiv preprint arXiv:2211.14966, 2022.
  40. PAC-Bayesian adversarially robust generalization bounds for deep neural networks. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.
  41. Rademacher complexity for adversarially robust generalization. In International Conference on Machine Learning, pp. 7085–7094. PMLR, 2019.
  42. Hierarchical graph representation learning with differentiable pooling. In Advances in Neural Information Processing Systems, volume 31, 2018.
  43. An end-to-end deep learning architecture for graph classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
  44. GNNGuard: Defending graph neural networks against adversarial attacks. In Advances in Neural Information Processing Systems, volume 33, pp.  9263–9275. Curran Associates, Inc., 2020.
  45. Robust graph convolutional networks against adversarial attacks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp.  1399–1407, 2019.
  46. Adversarial attacks on graph neural networks: Perturbations and their patterns. ACM Transactions on Knowledge Discovery from Data (TKDD), 14(5):1–31, 2020.
  47. Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp.  2847–2856. Association for Computing Machinery, 2018.
Authors (2)
  1. Tan Sun (1 paper)
  2. Junhong Lin (29 papers)
Citations (2)
