
VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees (2312.09748v2)

Published 15 Dec 2023 in cs.LG and cs.SE

Abstract: Machine learning techniques often lack formal correctness guarantees, as evidenced by the widespread adversarial examples that plague most deep-learning applications. This lack of formal guarantees has motivated several research efforts aimed at verifying Deep Neural Networks (DNNs), with a particular focus on safety-critical applications. However, formal verification techniques still face major scalability and precision challenges. The over-approximation introduced during formal verification to tackle the scalability challenge often results in inconclusive analysis. To address this challenge, we propose a novel framework to generate Verification-Friendly Neural Networks (VNNs): a post-training optimization framework that balances preserving prediction performance with verification-friendliness. The resulting VNNs are comparable to the original DNNs in prediction performance while being amenable to formal verification techniques. This enables robustness to be established, in a time-efficient manner, for more VNNs than for their DNN counterparts.
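
The abstract does not spell out the optimization itself, but the core idea, adjusting a trained network's weights so that verifier over-approximations stay tight while prediction accuracy is largely preserved, can be illustrated with a toy sketch. The example below is a minimal, hypothetical illustration assuming the post-training step takes the form of magnitude-based weight sparsification under an accuracy-drop budget; the function names, thresholds, and greedy acceptance rule are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch: zero out small weights layer by layer, keeping only
# changes whose accuracy drop on held-out data stays within a budget.
# Sparser, smaller-magnitude weights tend to tighten the over-approximations
# used by formal verifiers. This is an assumed stand-in for the paper's
# optimization, which the abstract does not specify.
import numpy as np

def accuracy(weights, X, y):
    """Forward pass of a toy 2-layer ReLU classifier and its accuracy."""
    W1, b1, W2, b2 = weights
    h = np.maximum(X @ W1 + b1, 0.0)          # ReLU hidden layer
    logits = h @ W2 + b2
    return float(np.mean(np.argmax(logits, axis=1) == y))

def make_verification_friendly(weights, X, y, budget=0.01,
                               thresholds=(1e-3, 1e-2, 1e-1)):
    """Greedily sparsify each weight matrix at the largest threshold whose
    accuracy drop on (X, y) stays within `budget`."""
    base = accuracy(weights, X, y)
    out = list(weights)
    for i in (0, 2):                          # weight matrices W1, W2
        for t in sorted(thresholds, reverse=True):
            trial = out.copy()
            trial[i] = np.where(np.abs(out[i]) < t, 0.0, out[i])
            if base - accuracy(trial, X, y) <= budget:
                out = trial                   # accept the sparsest safe choice
                break
    return tuple(out)

# Toy usage: random data and weights, just to exercise the function.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 8)), rng.integers(0, 3, size=200)
weights = (rng.normal(size=(8, 16)), np.zeros(16),
           rng.normal(size=(16, 3)), np.zeros(3))
vnn_weights = make_verification_friendly(weights, X, y)
print("nonzeros before/after:",
      sum(np.count_nonzero(w) for w in weights),
      sum(np.count_nonzero(w) for w in vnn_weights))
```

The design point the sketch captures is the trade-off named in the abstract: every sparsification step is accepted only if prediction performance stays within a stated tolerance, so verification-friendliness is bought without giving up accuracy beyond the budget.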
