
BNN-DP: Robustness Certification of Bayesian Neural Networks via Dynamic Programming (2306.10742v1)

Published 19 Jun 2023 in cs.LG and stat.ML

Abstract: In this paper, we introduce BNN-DP, an efficient algorithmic framework for the analysis of adversarial robustness of Bayesian Neural Networks (BNNs). Given a compact set of input points $T \subset \mathbb{R}^n$, BNN-DP computes lower and upper bounds on the BNN's predictions for all points in $T$. The framework is based on an interpretation of BNNs as stochastic dynamical systems, which enables the use of Dynamic Programming (DP) algorithms to bound the prediction range along the layers of the network. Specifically, the method uses bound propagation techniques and convex relaxations to derive a backward recursion procedure that over-approximates the prediction range of the BNN with piecewise affine functions. The algorithm is general and can handle both regression and classification tasks. In experiments on various regression and classification tasks and BNN architectures, we show that BNN-DP outperforms state-of-the-art methods by up to four orders of magnitude in both tightness of the bounds and computational efficiency.
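The dynamic-programming view in the abstract admits a natural reading (a hedged reconstruction from the abstract's description, not the paper's exact notation): model the network as a stochastic dynamical system $z_{k+1} = f_k(z_k, w_k)$, where $z_0 = x \in T$ is the input and the weights $w_k$ of layer $k$ are random under the posterior. The prediction to be bounded is then the value function of this system,

$$V_K(z) = z, \qquad V_k(z) = \mathbb{E}_{w_k}\!\left[V_{k+1}\big(f_k(z, w_k)\big)\right], \quad k = K-1, \dots, 0,$$

so that $V_0(x)$ is the BNN's posterior-mean prediction at $x$ (taking an identity readout, as in regression). BNN-DP over-approximates each $V_k$ backward from the last layer with piecewise affine lower and upper bounds; this is where the bound propagation and convex relaxations enter.

As a much cruder illustration of layer-by-layer bounding, the sketch below pushes an input box and weight intervals (posterior mean $\pm k\sigma$, for a hypothetical mean-field Gaussian posterior) forward through a toy two-layer ReLU network using interval arithmetic. This is not the BNN-DP algorithm, and every name, shape, and parameter here is assumed for illustration; it only shows the kind of layerwise bounding the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

def affine_interval(x_l, x_u, W_l, W_u, b_l, b_u):
    """Bound y = W @ x + b over the box [x_l, x_u] when each weight and
    bias also lies in an interval. Each scalar product w * x attains its
    extrema at a corner of [w_l, w_u] x [x_l, x_u], so take the min/max
    over the four corner products and sum them per output coordinate."""
    corners = np.stack([W_l * x_l, W_l * x_u, W_u * x_l, W_u * x_u])
    y_l = corners.min(axis=0).sum(axis=1) + b_l
    y_u = corners.max(axis=0).sum(axis=1) + b_u
    return y_l, y_u

def relu_interval(x_l, x_u):
    """ReLU is monotone, so it maps boxes to boxes elementwise."""
    return np.maximum(x_l, 0.0), np.maximum(x_u, 0.0)

# Toy mean-field Gaussian BNN (all values hypothetical): each weight has
# an independent N(mu, sigma^2) posterior; box the weights at mu +/- k*sigma.
mu1, sig1 = rng.normal(size=(8, 2)), 0.05 * np.ones((8, 2))
mu2, sig2 = rng.normal(size=(1, 8)), 0.05 * np.ones((1, 8))
b1, b2 = np.zeros(8), np.zeros(1)
k = 2.0

# Input region T = [-0.1, 0.1]^2 around the origin.
x_l, x_u = np.array([-0.1, -0.1]), np.array([0.1, 0.1])

h_l, h_u = affine_interval(x_l, x_u, mu1 - k * sig1, mu1 + k * sig1, b1, b1)
h_l, h_u = relu_interval(h_l, h_u)
y_l, y_u = affine_interval(h_l, h_u, mu2 - k * sig2, mu2 + k * sig2, b2, b2)
print(f"output bounds over T (weights within +/-{k} sigma): [{y_l[0]:.3f}, {y_u[0]:.3f}]")
```

Forward interval propagation of this kind is sound but loose, and it only covers a truncated weight region rather than the full posterior expectation; replacing the boxes with piecewise affine bounds and averaging over the weight posterior via the backward DP recursion is what allows BNN-DP to report bounds up to four orders of magnitude tighter than prior methods.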

