The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks (2301.07068v4)
Abstract: Deep Neural Networks (DNNs) are increasingly adopted in critical tasks that require a high level of safety, e.g., autonomous driving. While state-of-the-art verifiers can check whether a DNN is unsafe w.r.t. some given property (i.e., whether at least one unsafe input configuration exists), their yes/no output is not informative enough for other purposes, such as shielding, model selection, or training improvements. In this paper, we introduce the #DNN-Verification problem: counting the input configurations of a DNN that violate a given safety property. We analyze the complexity of this problem and propose a novel approach that returns the exact count of violations. Since the problem is #P-complete, we also propose a randomized, approximate method that provides a provable probabilistic bound on the correct count while significantly reducing computational requirements. Experimental results on a set of safety-critical benchmarks demonstrate the effectiveness of our approximate method and evaluate the tightness of the bound.
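The flavor of randomized approximate counting with a probabilistic guarantee can be illustrated, in a much-simplified form, by uniform Monte Carlo sampling over the input domain combined with a Hoeffding confidence bound. This is only a sketch, not the paper's actual algorithm: the toy ReLU network, the safety property, and all weights and thresholds below are hypothetical.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-input ReLU network (hypothetical weights, for illustration only).
W1 = np.array([[1.0, -1.0], [0.5, 1.0]])
b1 = np.array([0.0, -0.5])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([-0.25])

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)  # hidden ReLU layer
    return (W2 @ h + b2)[0]           # scalar output

# Hypothetical safety property: for inputs in [0,1]^2 the output must
# stay below 0.5. An input where this fails is a "violation".
def violates(x):
    return forward(x) >= 0.5

# Estimate the violation rate p (violating volume / total input volume)
# by uniform sampling over the input domain.
n = 100_000
samples = rng.uniform(0.0, 1.0, size=(n, 2))
hits = sum(violates(x) for x in samples)
p_hat = hits / n

# Hoeffding's inequality: with probability >= 1 - delta, |p_hat - p| <= eps.
delta = 0.01
eps = math.sqrt(math.log(2 / delta) / (2 * n))
print(f"estimated violation rate: {p_hat:.4f} +/- {eps:.4f} (conf. {1 - delta:.0%})")
```

The estimated rate times the input-domain volume gives an approximate violation count with the same relative guarantee; the paper's method additionally exploits the verifier itself to obtain provable bounds far more efficiently than naive sampling in low-probability regimes.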