
On the Role of Randomization in Adversarially Robust Classification (2302.07221v3)

Published 14 Feb 2023 in cs.LG

Abstract: Deep neural networks are known to be vulnerable to small adversarial perturbations in test data. To defend against adversarial attacks, probabilistic classifiers have been proposed as an alternative to deterministic ones. However, the literature contains conflicting findings on the effectiveness of probabilistic classifiers compared to deterministic ones. In this paper, we clarify the role of randomization in building adversarially robust classifiers. Given a base hypothesis set of deterministic classifiers, we show the conditions under which a randomized ensemble outperforms the hypothesis set in adversarial risk, extending previous results. Additionally, we show that for any probabilistic binary classifier (including randomized ensembles), there exists a deterministic classifier that outperforms it. Finally, we give an explicit description of the deterministic hypothesis set that contains such a deterministic classifier for many commonly used types of probabilistic classifiers, namely randomized ensembles and parametric/input noise injection.
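The interplay described in the abstract can be made concrete with a toy example. The sketch below is not from the paper: the single-point data distribution, the discrete attack set, and the classifiers `h1`, `h2`, `h3` are illustrative assumptions chosen so that adversarial risk can be computed by exhaustive enumeration. It shows a 50/50 randomized ensemble achieving lower adversarial risk than either of its deterministic members, while a different deterministic classifier (outside the base set) outperforms the ensemble, mirroring the paper's two claims.

```python
# Toy illustration (not from the paper): adversarial risk of deterministic
# classifiers vs. a randomized ensemble, computed by exhaustive enumeration.
# Setup: one test point x = 0 with label y = 0; the adversary may move it
# anywhere in the attack set A = {-1, 0, 1}.

A = [-1, 0, 1]  # attack set: all allowed perturbed versions of x = 0

# Two deterministic base classifiers, each fooled by a different perturbation.
h1 = lambda x: 1 if x == 1 else 0   # fooled only at x' = +1
h2 = lambda x: 1 if x == -1 else 0  # fooled only at x' = -1

def adv_risk(classifiers, weights, y=0):
    """Adversarial risk of a randomized ensemble on the point (0, y):
    the adversary picks the x' in A maximizing the expected 0-1 loss."""
    return max(
        sum(w for h, w in zip(classifiers, weights) if h(xp) != y)
        for xp in A
    )

# Each base classifier alone can always be fooled: adversarial risk 1.
print(adv_risk([h1], [1.0]))           # 1.0
print(adv_risk([h2], [1.0]))           # 1.0

# The 50/50 randomized ensemble halves the risk: no single perturbation
# fools both members, so the adversary's best expected loss is 0.5.
print(adv_risk([h1, h2], [0.5, 0.5]))  # 0.5

# Consistent with the paper's second result, a deterministic classifier
# that outperforms the ensemble exists -- here, one that is constant 0
# on A (note it lies outside the base set {h1, h2}).
h3 = lambda x: 0
print(adv_risk([h3], [1.0]))           # 0.0
```

The example only demonstrates that both phenomena are possible; the paper's contribution is characterizing exactly when the randomized ensemble improves on its base set, and describing the deterministic hypothesis set containing the classifier that beats a given probabilistic one.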

