A DPLL(T) Framework for Verifying Deep Neural Networks (2307.10266v3)

Published 17 Jul 2023 in cs.LG, cs.LO, and cs.SE

Abstract: Deep Neural Networks (DNNs) have emerged as an effective approach to tackling real-world problems. However, like human-written software, DNNs can have bugs and can be attacked. To address this, research has explored a wide range of algorithmic approaches to verify DNN behavior. In this work, we introduce NeuralSAT, a new verification approach that adapts the DPLL(T) algorithm widely used in modern SMT solvers. A key feature of SMT solvers is the use of conflict clause learning and search restarts to scale verification. Unlike prior DNN verification approaches, NeuralSAT combines an abstraction-based deductive theory solver with clause learning, and an evaluation clearly demonstrates the benefits of the approach on a set of challenging verification benchmarks.
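
To make the approach concrete, below is a minimal, self-contained Python sketch of a DPLL(T)-style search over ReLU activation phases, with a simple interval/affine analysis standing in for the abstraction-based theory solver. This is only an illustration under stated assumptions, not the NeuralSAT implementation: the toy network, the property threshold, the naive clause-learning scheme, and all names (`theory_solver`, `affine_interval`, `verify`) are invented for this example, and restarts are omitted.

```python
# Illustrative DPLL(T)-style branch-and-prune for a toy ReLU network.
# NOT the authors' NeuralSAT code; network, property, and helpers are made up.
import numpy as np

# Toy network y = w2 @ relu(W1 @ x + b1) + b2 over the input box [-1, 1]^2.
W1 = np.array([[1.0, -1.0], [0.5, 1.0]])
b1 = np.array([0.0, -0.25])
w2 = np.array([1.0, 1.0])
b2 = 0.0
LO, HI = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
THRESHOLD = 3.0  # property: y <= THRESHOLD for every x in the box


def affine_interval(A, c, lo, hi):
    """Exact interval bounds of A @ x + c for x in the box [lo, hi]."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    mid = A @ center + c
    rad = np.abs(A) @ radius
    return mid - rad, mid + rad


def theory_solver(phases):
    """Abstraction-based 'theory' check for a ReLU phase assignment.
    phases[i] is True (forced active), False (forced inactive), or None.
    Returns (feasible, upper bound on y)."""
    pre_lo, pre_hi = affine_interval(W1, b1, LO, HI)
    # Conflict check: a forced phase must be consistent with the neuron's bounds.
    for i, ph in enumerate(phases):
        if ph is True and pre_hi[i] < 0.0:
            return False, None
        if ph is False and pre_lo[i] > 0.0:
            return False, None
    if None not in phases:
        # Complete assignment: the network is affine, so bound it over the box.
        D = np.diag([1.0 if ph else 0.0 for ph in phases])
        A = w2 @ D @ W1
        c = float(w2 @ D @ b1 + b2)
        _, y_hi = affine_interval(A.reshape(1, -1), np.array([c]), LO, HI)
        return True, float(y_hi[0])
    # Partial assignment: fall back to per-neuron interval bounds.
    post_lo, post_hi = np.maximum(pre_lo, 0.0), np.maximum(pre_hi, 0.0)
    for i, ph in enumerate(phases):
        if ph is False:
            post_lo[i] = post_hi[i] = 0.0
    _, y_hi = affine_interval(w2.reshape(1, -1), np.array([b2]), post_lo, post_hi)
    return True, float(y_hi[0])


def verify():
    n = W1.shape[0]
    learned = []                  # learned conflict clauses (blocked partial patterns)
    stack = [tuple([None] * n)]   # DPLL-style search over phase assignments
    while stack:
        phases = stack.pop()
        if any(all(phases[i] == v for i, v in clause) for clause in learned):
            continue              # pruned by a previously learned conflict
        feasible, y_hi = theory_solver(phases)
        if not feasible:
            # Naively learn the assigned literals as a conflict clause
            # (no conflicts arise on this tiny instance, but the mechanism
            # mirrors CDCL-style learning).
            learned.append([(i, v) for i, v in enumerate(phases) if v is not None])
            continue
        if y_hi <= THRESHOLD:
            continue              # property proved on this branch: prune
        undecided = [i for i, ph in enumerate(phases) if ph is None]
        if not undecided:
            return f"unknown: possible violation under phase pattern {phases}"
        i = undecided[0]          # decision: branch on an undecided ReLU
        for v in (True, False):
            child = list(phases)
            child[i] = v
            stack.append(tuple(child))
    return f"verified: y <= {THRESHOLD} on the whole input box"


if __name__ == "__main__":
    print(verify())  # -> verified: y <= 3.0 on the whole input box
```

Running the sketch prints a "verified" result because every branch of the phase-assignment search is discharged by the bound analysis; in a real solver the theory deduction, decision heuristics, clause learning, and restarts are far more sophisticated.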
