ADVREPAIR: Provable Repair of Adversarial Attack (2404.01642v1)

Published 2 Apr 2024 in cs.LG and cs.CR

Abstract: Deep neural networks (DNNs) are increasingly deployed in safety-critical domains, but their vulnerability to adversarial attacks poses serious safety risks. Existing neuron-level repair methods that use limited data are ineffective against adversarial attacks because of the inherent complexity of attack mechanisms, while adversarial training, which leverages a large number of adversarial samples to enhance robustness, lacks provability. In this paper, we propose ADVREPAIR, a novel approach for provable repair of adversarial attacks using limited data. Using formal verification, ADVREPAIR constructs patch modules that, when integrated with the original network, deliver provable and specialized repairs within the robustness neighborhood. Additionally, our approach incorporates a heuristic mechanism for assigning patch modules, allowing the defense to generalize to other inputs. ADVREPAIR demonstrates superior efficiency, scalability, and repair success rate. Unlike existing DNN repair methods, our repairs generalize to unseen inputs, improving the robustness of the neural network globally; this marks a significant advance in generalization capability.
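
The mechanism described in the abstract (verified patch modules attached to an unmodified base network, plus a heuristic rule for assigning inputs to patches) can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch illustration under stated assumptions, not the authors' implementation: the `PatchedNet` wrapper, the nearest-center assignment under the L-infinity norm, and the additive patch correction are all expository assumptions.

```python
# Hypothetical sketch of the patch-module idea: the base network is left
# untouched, and a small patch module is applied only when an input falls
# inside the L-infinity robustness neighborhood of a repaired sample.
# All names and design choices here are illustrative assumptions.
import torch
import torch.nn as nn

class PatchedNet(nn.Module):
    def __init__(self, base_net: nn.Module, centers: torch.Tensor,
                 patches: nn.ModuleList, radius: float):
        super().__init__()
        self.base_net = base_net  # original, unmodified DNN
        self.centers = centers    # repaired samples, shape (k, *input_shape)
        self.patches = patches    # one small patch module per neighborhood
        self.radius = radius      # L-infinity radius of each neighborhood

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base_net(x).clone()  # clone so we can edit rows in place
        for i in range(len(x)):
            # Heuristic assignment: nearest repair center under L-infinity.
            dists = (self.centers - x[i]).flatten(1).abs().amax(dim=1)
            j = int(torch.argmin(dists))
            if dists[j] <= self.radius:
                # Inside a repaired neighborhood: add the patch correction.
                out[i] = out[i] + self.patches[j](x[i].unsqueeze(0)).squeeze(0)
        return out
```

In the paper's setting, each patch module would be synthesized with formal verification so that the combined output is provably correct inside its robustness neighborhood; the sketch only shows how such patches could be routed to inputs at inference time, and how the heuristic assignment lets a patch also fire on inputs near, but not identical to, the originally repaired sample.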

