Robustness Verification in Neural Networks

Published 20 Mar 2024 in cs.AI and cs.LG (arXiv:2403.13441v1)

Abstract: In this paper we investigate formal verification problems for neural network computations. Of central importance are various robustness and minimization problems, such as: given symbolic specifications of allowed inputs and outputs in the form of Linear Programming instances, does there exist a valid input on which the network computes a valid output? Does this property hold for all valid inputs? Do two given networks compute the same function? Is there a smaller network computing the same function? The complexity of these questions has recently been investigated from a practical point of view and attacked with heuristic algorithms. We complement these achievements by giving a theoretical framework that enables us to interchange security and efficiency questions in neural networks and to analyze their computational complexity. We show that the problems are manageable in a semi-linear setting, meaning that for piecewise linear activation functions, and when the sum or maximum metric is used, most of them lie in P or at most in NP.
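To make the complexity claim concrete, the following is a minimal sketch, not taken from the paper, of the reachability question for a one-hidden-layer ReLU network. All names (check_reachability, W1, b1, w2, b2, A_in, c_in) are illustrative assumptions. The idea it demonstrates: once the activation pattern of the ReLU units is fixed, the network is affine on the corresponding input region, so each guessed pattern reduces the question to a linear feasibility check. Guessing the pattern and checking the LP is exactly the standard NP-membership argument for piecewise linear activations.

```python
# Hedged sketch: brute-force reachability check for a tiny ReLU network.
# Asks: is there an x with A_in @ x <= c_in such that the network output
# w2 @ relu(W1 @ x + b1) + b2 lies in the interval [lo, hi]?
import itertools
import numpy as np
from scipy.optimize import linprog

def check_reachability(W1, b1, w2, b2, A_in, c_in, lo, hi):
    m, n = W1.shape
    for pattern in itertools.product([0, 1], repeat=m):
        s = np.array(pattern)
        # On the region fixed by this pattern, relu(W1 x + b1) = s * (W1 x + b1).
        # Region constraints: active units >= 0, inactive units <= 0,
        # rewritten in the A x <= c form expected by the LP solver.
        A_sign = np.vstack([-W1[s == 1], W1[s == 0]])
        c_sign = np.concatenate([b1[s == 1], -b1[s == 0]])
        # The network is affine on this region: output = w_eff @ x + b_eff.
        w_eff = (w2 * s) @ W1
        b_eff = (w2 * s) @ b1 + b2
        # Output constraint lo <= w_eff @ x + b_eff <= hi, split into two rows.
        A = np.vstack([A_in, A_sign, w_eff[None, :], -w_eff[None, :]])
        c = np.concatenate([c_in, c_sign, [hi - b_eff], [b_eff - lo]])
        # Feasibility check: zero objective, only the constraints matter.
        res = linprog(np.zeros(n), A_ub=A, b_ub=c,
                      bounds=[(None, None)] * n, method="highs")
        if res.status == 0:  # feasible LP => a witness input exists
            return True
    return False
```

Enumerating all 2^m activation patterns is exponential in the number of hidden units, which is consistent with the abstract's statement that these problems land in NP at most: a nondeterministic machine simply guesses the right pattern and verifies one polynomial-size LP.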

