
Scaling Model Checking for DNN Analysis via State-Space Reduction and Input Segmentation (Extended Version)

Published 29 Jun 2023 in cs.LG (arXiv:2306.17323v2)

Abstract: Owing to their remarkable learning capabilities and performance in real-world applications, the use of machine learning systems based on Neural Networks (NNs) has been continuously increasing. However, various case studies and empirical findings in the literature suggest that slight variations to NN inputs can lead to erroneous and undesirable NN behavior. This has led to considerable interest in their formal analysis, aiming to provide guarantees regarding a given NN's behavior. Existing frameworks provide robustness and/or safety guarantees for trained NNs using satisfiability solving and linear programming. We proposed FANNet, the first model checking-based framework for analyzing a broader range of NN properties. However, the state-space explosion associated with model checking entails a scalability problem, making FANNet applicable only to small NNs. This work develops state-space reduction and input segmentation approaches to improve the scalability and timing efficiency of formal NN analysis. Compared to the state-of-the-art FANNet, this enables our new model checking-based framework to reduce verification time by a factor of up to 8000, making the framework applicable to NNs with approximately 80 times more network parameters. This in turn allows the analysis of NN safety properties using the new framework, in addition to all the NN properties already covered by FANNet. The framework is shown to efficiently analyze properties of NNs trained on healthcare datasets as well as the well-known ACAS Xu NNs.
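The input-segmentation idea in the abstract can be illustrated with a minimal sketch: partition the input domain into segments and check a property on each segment independently, so each sub-problem covers a smaller state space. The sketch below is a simplified stand-in that uses interval arithmetic on a toy ReLU network rather than the paper's model-checking backend (nuXmv); the network weights and the property threshold are hypothetical values chosen for illustration.

```python
import numpy as np

# Hypothetical toy network: one hidden ReLU layer with fixed weights
# (illustrative values only, not from the paper).
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -0.5])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.0])

def interval_bounds(lo, hi):
    """Propagate an input box [lo, hi] through the network with
    interval arithmetic, returning output lower/upper bounds."""
    # Affine layer: split weights by sign for sound interval bounds.
    W1p, W1n = np.maximum(W1, 0), np.minimum(W1, 0)
    h_lo = W1p @ lo + W1n @ hi + b1
    h_hi = W1p @ hi + W1n @ lo + b1
    # ReLU is monotone, so it maps bounds to bounds directly.
    h_lo, h_hi = np.maximum(h_lo, 0), np.maximum(h_hi, 0)
    W2p, W2n = np.maximum(W2, 0), np.minimum(W2, 0)
    y_lo = W2p @ h_lo + W2n @ h_hi + b2
    y_hi = W2p @ h_hi + W2n @ h_lo + b2
    return y_lo, y_hi

def verify_segmented(lo, hi, threshold, n_segments=4):
    """Check the safety property 'output <= threshold' on each input
    segment along dimension 0; returns [(segment, holds), ...].
    Each segment is a smaller verification problem than the full box."""
    edges = np.linspace(lo[0], hi[0], n_segments + 1)
    results = []
    for a, b in zip(edges[:-1], edges[1:]):
        seg_lo = np.array([a, lo[1]])
        seg_hi = np.array([b, hi[1]])
        _, y_hi = interval_bounds(seg_lo, seg_hi)
        results.append(((a, b), bool(y_hi[0] <= threshold)))
    return results
```

If every segment satisfies the property, the property holds on the whole input box; a failing segment localizes where a counterexample may lie, which is the same divide-and-conquer benefit the framework exploits to tame state-space explosion.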

