Robustness Verification in Neural Networks (2403.13441v1)
Abstract: In this paper we investigate formal verification problems for neural network computations. Of central importance are various robustness and minimization problems, such as: given symbolic specifications of allowed inputs and outputs in the form of Linear Programming instances, does there exist a valid input on which the network computes a valid output? Does this property hold for all valid inputs? Do two given networks compute the same function? Is there a smaller network computing the same function? The complexity of these questions has recently been investigated from a practical point of view and approximated by heuristic algorithms. We complement these achievements with a theoretical framework that enables us to interchange security and efficiency questions in neural networks and to analyze their computational complexities. We show that the problems are conquerable in a semi-linear setting, meaning that for piecewise linear activation functions, and when the sum or maximum metric is used, most of them lie in P or at most in NP.
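The NP membership argument behind the reachability question can be made concrete: once the activation pattern of every ReLU neuron is fixed (in NP terms, guessed), the network becomes affine and the whole specification collapses into a single Linear Program, solvable in polynomial time. The sketch below illustrates this for a one-hidden-layer ReLU network; it is not code from the paper, and all names (`reachable`, `W1`, `A_in`, and so on) as well as the choice of `scipy.optimize.linprog` are illustrative assumptions.

```python
# Sketch (assumed setup, not from the paper): reachability for a one-hidden-layer
# ReLU network via activation-pattern enumeration. Input spec: A_in @ x <= c_in.
# Output spec: A_out @ y <= c_out. Each fixed pattern yields one LP feasibility
# check, which is the polynomial-time verification step of the NP argument.
import itertools
import numpy as np
from scipy.optimize import linprog

def reachable(W1, b1, W2, b2, A_in, c_in, A_out, c_out):
    """Is there an x with A_in @ x <= c_in whose output y = W2 @ relu(W1 @ x + b1) + b2
    satisfies A_out @ y <= c_out?"""
    n, m = W1.shape[1], W1.shape[0]
    for pattern in itertools.product([0, 1], repeat=m):  # exponentially many patterns
        D = np.diag(pattern)                 # fixes ReLU to the affine piece h = D(W1 x + b1)
        rows, rhs = [A_in], [c_in]           # input polytope
        for i, s in enumerate(pattern):      # sign constraints enforcing the pattern
            if s:                            # active neuron: W1_i x + b1_i >= 0
                rows.append(-W1[i:i+1]); rhs.append(np.array([b1[i]]))
            else:                            # inactive neuron: W1_i x + b1_i <= 0
                rows.append(W1[i:i+1]); rhs.append(np.array([-b1[i]]))
        M = A_out @ W2 @ D                   # output spec pulled back to input space
        rows.append(M @ W1)
        rhs.append(c_out - A_out @ (W2 @ (D @ b1) + b2))
        res = linprog(np.zeros(n), A_ub=np.vstack(rows), b_ub=np.concatenate(rhs),
                      bounds=[(None, None)] * n, method="highs")
        if res.status == 0:                  # LP feasible: a witness input exists
            return True
    return False
```

Enumerating all 2^m patterns makes the procedure exponential overall, which matches the hardness side: the difficulty lies in finding the right pattern, not in checking it. The universal variant (every valid input yields a valid output) is the complement, answerable by running reachability against the negated output specification, which places it in coNP under the same assumptions.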