Expressive Losses for Verified Robustness via Convex Combinations (2305.13991v3)

Published 23 May 2023 in cs.LG, cs.CR, and stat.ML

Abstract: In order to train networks for verified adversarial robustness, it is common to over-approximate the worst-case loss over perturbation regions, resulting in networks that attain verifiability at the expense of standard performance. As shown in recent work, better trade-offs between accuracy and robustness can be obtained by carefully coupling adversarial training with over-approximations. We hypothesize that the expressivity of a loss function, which we formalize as the ability to span a range of trade-offs between lower and upper bounds to the worst-case loss through a single parameter (the over-approximation coefficient), is key to attaining state-of-the-art performance. To support our hypothesis, we show that trivial expressive losses, obtained via convex combinations between adversarial attacks and IBP bounds, yield state-of-the-art results across a variety of settings in spite of their conceptual simplicity. We provide a detailed analysis of the relationship between the over-approximation coefficient and performance profiles across different expressive losses, showing that, while expressivity is essential, better approximations of the worst-case loss are not necessarily linked to superior robustness-accuracy trade-offs.
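The loss described in the abstract lends itself to a compact illustration. The sketch below is a minimal, self-contained PyTorch rendering of one such expressive loss: a convex combination, governed by the over-approximation coefficient alpha, between an adversarial (PGD) loss and the IBP upper bound on the worst-case loss. The toy architecture, attack settings, and helper names (ibp_bounds, pgd_attack, expressive_loss) are illustrative assumptions rather than the paper's exact setup; input-domain clamping and training schedules are omitted, and the paper also considers taking the combination inside the loss rather than between losses.

```python
# Hedged sketch of an "expressive loss": a convex combination between an
# adversarial loss (a lower bound to the worst-case loss) and an IBP loss
# (an upper bound). Names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def ibp_bounds(layers, x, eps):
    """Propagate interval bounds through Linear/ReLU layers (plain IBP)."""
    lb, ub = x - eps, x + eps
    for layer in layers:
        if isinstance(layer, nn.Linear):
            mid, rad = (lb + ub) / 2, (ub - lb) / 2
            mid = layer(mid)                      # W @ mid + b
            rad = rad @ layer.weight.abs().T      # |W| @ rad
            lb, ub = mid - rad, mid + rad
        elif isinstance(layer, nn.ReLU):
            lb, ub = lb.clamp(min=0), ub.clamp(min=0)
    return lb, ub

def worst_case_logits(lb, ub, y):
    """Pessimistic logits: lower bound on the true class, upper elsewhere."""
    onehot = F.one_hot(y, lb.shape[-1]).bool()
    return torch.where(onehot, lb, ub)

def pgd_attack(model, x, y, eps, steps=8):
    """Plain l-inf PGD; input-domain clamping omitted for brevity."""
    step_size = 2 * eps / steps
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).detach()

def expressive_loss(model, layers, x, y, eps, alpha):
    """(1 - alpha) * adversarial loss + alpha * IBP loss.
    alpha = 0 recovers adversarial training; alpha = 1 recovers pure IBP."""
    adv_loss = F.cross_entropy(model(pgd_attack(model, x, y, eps)), y)
    lb, ub = ibp_bounds(layers, x, eps)
    ibp_loss = F.cross_entropy(worst_case_logits(lb, ub, y), y)
    return (1 - alpha) * adv_loss + alpha * ibp_loss

# Toy usage on random data (shapes only; not a real training run).
layers = [nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)]
model = nn.Sequential(*layers)  # shares parameters with `layers`
x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
loss = expressive_loss(model, layers, x, y, eps=0.1, alpha=0.5)
loss.backward()
```

Expressivity in the abstract's sense lives entirely in the single scalar alpha: sweeping it from 0 to 1 traces a family of losses spanning a lower bound (the attack) to an upper bound (IBP) on the worst-case loss, which is the knob the paper analyzes against robustness-accuracy trade-offs.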
