The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks (2301.07068v4)

Published 17 Jan 2023 in cs.AI and cs.LG

Abstract: Deep Neural Networks are increasingly adopted in critical tasks that require a high level of safety, e.g., autonomous driving. While state-of-the-art verifiers can be employed to check whether a DNN is unsafe w.r.t. some given property (i.e., whether there is at least one unsafe input configuration), their yes/no output is not informative enough for other purposes, such as shielding, model selection, or training improvements. In this paper, we introduce the #DNN-Verification problem, which involves counting the number of input configurations of a DNN that result in a violation of a particular safety property. We analyze the complexity of this problem and propose a novel approach that returns the exact count of violations. Due to the #P-completeness of the problem, we also propose a randomized, approximate method that provides a provable probabilistic bound on the correct count while significantly reducing computational requirements. We present experimental results on a set of safety-critical benchmarks that demonstrate the effectiveness of our approximate method and evaluate the tightness of the bound.
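To give a feel for what a randomized counter with a probabilistic guarantee looks like, here is a minimal sketch, not the paper's algorithm: it estimates the violation rate of a toy two-layer ReLU network by uniform sampling over the input box and wraps the estimate in a Hoeffding-style confidence interval. The network weights, the safety property (output below 1.0), and all parameter names are hypothetical, chosen only for illustration.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# A toy 2-layer ReLU network (arbitrary weights, for illustration only).
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([-0.5])

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)  # ReLU hidden layer
    return (W2 @ h + b2)[0]

def violates(x):
    # Hypothetical safety property: the output must stay below 1.0.
    return forward(x) >= 1.0

# Monte Carlo estimate of the violation rate over the input box [0, 1]^2.
n = 100_000
hits = sum(violates(rng.uniform(0.0, 1.0, size=2)) for _ in range(n))
p_hat = hits / n

# Hoeffding bound: with probability >= 1 - delta, |p_hat - p| <= eps,
# where eps = sqrt(ln(2 / delta) / (2 n)).
delta = 0.01
eps = math.sqrt(math.log(2.0 / delta) / (2.0 * n))
print(f"violation rate ~= {p_hat:.4f} +/- {eps:.4f} (confidence {1 - delta:.0%})")
```

Over a discretized input domain, multiplying the estimated rate by the total number of input configurations yields an approximate violation count, which is the quantity the #DNN-Verification problem asks for; the paper's method obtains its bound with a more refined sampling scheme than this uniform baseline.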
