Tight Verification of Probabilistic Robustness in Bayesian Neural Networks (2401.11627v2)
Abstract: We introduce two algorithms for computing tight guarantees on the probabilistic robustness of Bayesian Neural Networks (BNNs). Computing robustness guarantees for BNNs is a significantly more challenging task than verifying the robustness of standard Neural Networks (NNs) because it requires searching the parameter space for safe weights. Moreover, tight and complete approaches for verifying standard NNs, such as those based on Mixed-Integer Linear Programming (MILP), cannot be directly applied to BNNs because of the polynomial terms that arise from the repeated multiplication of the variables encoding the weights across layers. Our algorithms efficiently and effectively search the parameter space for safe weights by using iterative expansion together with the network's gradient, and they can be combined with any BNN verification algorithm of choice. In addition to proving that our algorithms compute tighter bounds than the state of the art (SoA), we evaluate them against the SoA on standard benchmarks such as MNIST and CIFAR-10, showing that they compute bounds up to 40% tighter.
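The abstract summarizes the core idea: grow a region of weight space, certify it with a verifier, and let the network's gradient steer the growth. As a rough illustration only, the sketch below implements one plausible gradient-guided iterative expansion. The names (`grow_safe_box`, `verify_box`) and the widening heuristic are assumptions made for this sketch, not the paper's actual algorithms; the verifier is a user-supplied black box, matching the abstract's claim that the search can wrap any BNN verification procedure.

```python
import numpy as np

def grow_safe_box(w_center, grad, verify_box, step=1e-2, max_iter=50):
    """Grow an axis-aligned box [lo, hi] of safe weights around w_center.

    verify_box(lo, hi) -> bool is a user-supplied sound verifier that
    returns True only if every weight vector inside the box is safe
    (e.g., interval bound propagation over weight intervals).
    Dimensions where |grad| is small are widened faster, on the
    heuristic that the output is less sensitive to them.
    """
    # Per-dimension expansion rates, inversely proportional to |grad|,
    # rescaled so the most-expanded dimension grows by `step` per round.
    rates = 1.0 / (np.abs(grad) + 1e-8)
    rates = step * rates / rates.max()

    lo, hi = w_center.copy(), w_center.copy()
    for _ in range(max_iter):
        cand_lo, cand_hi = lo - rates, hi + rates
        if verify_box(cand_lo, cand_hi):
            lo, hi = cand_lo, cand_hi   # certified: keep the larger box
        else:
            rates *= 0.5                # back off, retry with smaller steps
            if rates.max() < 1e-8:
                break
    return lo, hi

# Toy usage with a stand-in "verifier" that only accepts narrow boxes:
w = np.zeros(4)
g = np.array([0.1, 1.0, 0.5, 0.01])
toy_verifier = lambda lo, hi: bool(np.all(hi - lo < 0.1))
lo, hi = grow_safe_box(w, g, toy_verifier)
```

In this reading, every accepted expansion is already certified, so the returned box is safe by construction; the probability mass the posterior assigns to it then lower-bounds the probabilistic robustness, which is why tighter (larger certified) boxes translate into tighter bounds.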