Verifying message-passing neural networks via topology-based bounds tightening

Published 21 Feb 2024 in math.OC and cs.LG (arXiv:2402.13937v2)

Abstract: Since graph neural networks (GNNs) are often vulnerable to attack, we need to know when we can trust them. We develop a computationally efficient approach to providing robustness certificates for message-passing neural networks (MPNNs) with Rectified Linear Unit (ReLU) activations. Because our work builds on mixed-integer optimization, it encodes a wide variety of subproblems: it admits (i) both adding and removing edges, (ii) both global and local attack budgets, and (iii) both topological perturbations and feature modifications. Our key technology, topology-based bounds tightening, uses graph structure to tighten variable bounds. We also experiment with aggressive bounds tightening, which dynamically strengthens the optimization constraints by tightening variable bounds during the solve. To demonstrate the effectiveness of these strategies, we implement an extension to the open-source branch-and-cut solver SCIP. We test on both node and graph classification problems and consider topological attacks that both add and remove edges.
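The idea behind topology-based bounds tightening can be illustrated with a minimal sketch: for a single sum-aggregation MPNN layer, an attacker who may flip at most `budget` adjacency entries per node can only change the aggregation by the best (or worst) `budget` neighbor contributions, so the graph structure directly yields pre-activation bounds usable as big-M values in a MIP encoding. This is not the paper's implementation; the function name, the per-node local budget, and the per-channel relaxation (applying the budget independently to each output channel, which is valid but not tight) are illustrative assumptions.

```python
import numpy as np

def mpnn_layer_bounds(X, W, A, budget):
    """Pre-activation interval bounds z_lb <= (A' @ X @ W) <= z_ub over all
    adjacency matrices A' obtained from A by flipping at most `budget`
    off-diagonal entries in each row (a local topological attack budget).

    Relaxation: the budget is applied independently per output channel,
    so the bounds are valid but not tight. The resulting intervals can
    serve as big-M constants for the ReLU constraints in a MIP encoding.
    """
    C = X @ W                      # per-node contributions, shape (n, d)
    n, d = C.shape
    lb = np.empty((n, d))
    ub = np.empty((n, d))
    for v in range(n):
        nbr = A[v].astype(bool)
        base = C[nbr].sum(axis=0)  # unperturbed aggregation for node v
        for k in range(d):
            # Effect of flipping entry (v, u): adding non-neighbor u
            # contributes +C[u, k]; removing neighbor u contributes -C[u, k].
            delta = np.where(nbr, -C[:, k], C[:, k]).astype(float)
            delta[v] = 0.0         # the diagonal entry is not attackable
            pos = np.sort(delta[delta > 0])[::-1]   # best increases first
            neg = np.sort(delta[delta < 0])         # worst decreases first
            ub[v, k] = base[k] + pos[:budget].sum()
            lb[v, k] = base[k] + neg[:budget].sum()
    return lb, ub
```

Applying ReLU to these bounds (clipping both at zero) propagates them to the next layer; a fixed ReLU (always active or always inactive within the interval) removes the binary variable for that neuron, which is how tightened bounds shrink the branch-and-cut search.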
