Agnostic Multi-Robust Learning Using ERM (2303.08944v2)

Published 15 Mar 2023 in cs.LG, cs.CR, and cs.CV

Abstract: A fundamental problem in robust learning is asymmetry: a learner must correctly classify every one of the exponentially many perturbations an adversary might apply to a test-time natural example, while the attacker needs to find only one successful perturbation. Xiang et al. [2022] proposed an algorithm that, in the context of patch attacks for image classification, reduces the effective number of perturbations the learner must consider from exponential to polynomial, and learns using an ERM oracle. However, to achieve its guarantee, their algorithm requires the natural examples to be robustly realizable. This prompts a natural question: can we extend their approach to the non-robustly-realizable case, where no classifier achieves zero robust error? Our first contribution is to answer this question affirmatively by reducing the problem to a setting in which an algorithm proposed by Feige et al. [2015] can be applied, extending their guarantees in the process. We then extend our results to a multi-group setting and introduce a novel agnostic multi-robust learning problem, where the goal is to learn a predictor that achieves low robust loss on a (potentially) rich collection of subgroups.
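At a high level, the reduction can be pictured as a weighting game played against an ERM oracle: each natural example is inflated into its polynomially many patch perturbations (following Xiang et al. [2022]), and a multiplicative-weights loop in the spirit of Feige et al. [2015] repeatedly reweights the perturbed copies and queries the oracle. The sketch below is illustrative only, not the paper's algorithm; the callables erm_oracle and perturb, the step size, and the majority-vote aggregation are assumptions made for the example.

    # Illustrative sketch only: an ERM-oracle multiplicative-weights loop over
    # an inflated (polynomial-size) perturbation set. `erm_oracle` and
    # `perturb` are hypothetical stand-ins for components assumed as given.
    import math

    def robust_mw_learn(data, erm_oracle, perturb, rounds=50):
        """data: list of (x, y) natural examples.
        perturb(x): iterable of the polynomially many perturbed copies of x.
        erm_oracle(weighted_pairs): hypothesis h minimizing weighted 0/1 error
        over a list of ((z, y), weight) pairs.
        Returns one hypothesis per round; predict with predict_majority."""
        # Inflate each natural example into its perturbation set: a predictor
        # is robustly correct on (x, y) only if it is correct on every copy.
        pairs = [(z, y) for x, y in data for z in perturb(x)]
        weights = [1.0] * len(pairs)
        eta = math.sqrt(math.log(max(len(pairs), 2)) / rounds)  # MW step size
        hypotheses = []
        for _ in range(rounds):
            # Best response to the current weights via the ERM oracle.
            h = erm_oracle(list(zip(pairs, weights)))
            hypotheses.append(h)
            # Upweight perturbed copies the hypothesis misclassifies, steering
            # later rounds toward the adversary's most effective perturbations.
            weights = [w * math.exp(eta) if h(z) != y else w
                       for (z, y), w in zip(pairs, weights)]
            total = sum(weights)
            weights = [w / total for w in weights]
        return hypotheses

    def predict_majority(hypotheses, z):
        # Aggregate the per-round hypotheses; in this style of analysis the
        # guarantee attaches to an aggregated predictor, not any single round.
        votes = [h(z) for h in hypotheses]
        return max(set(votes), key=votes.count)

In the agnostic regime targeted here, no hypothesis attains zero robust error, so each round aims for low weighted error rather than exact robust realizability; a multi-group variant would additionally need to track robust loss on each subgroup in the collection.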

References (28)
  1. Improved generalization bounds for adversarially robust learning. Journal of Machine Learning Research, 23(175):1–31, 2022.
  2. Learnability and the Vapnik-Chervonenkis dimension. Journal of the Association for Computing Machinery, 36(4):929–965, 1989.
  3. Adversarial patch. arXiv preprint arXiv:1712.09665, 2017.
  4. Certified defenses for adversarial patches. arXiv preprint arXiv:2003.06693, 2020.
  5. Learning and inference in the presence of corrupted inputs. In Peter Grünwald, Elad Hazan, and Satyen Kale, editors, Proceedings of The 28th Conference on Learning Theory, COLT 2015, Paris, France, July 3-6, 2015, volume 40 of JMLR Workshop and Conference Proceedings, pages 637–657. JMLR.org, 2015. URL http://proceedings.mlr.press/v40/Feige15.html.
  6. Game theory, on-line prediction and boosting. In Proceedings of the ninth annual conference on Computational learning theory, pages 325–332, 1996.
  7. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences, 55(1):119–139, 1997.
  8. Beyond the frontier: Fairness without accuracy loss. arXiv preprint arXiv:2201.10408, 2022.
  9. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
  10. Loss minimization through the lens of outcome indistinguishability. arXiv preprint arXiv:2210.08649, 2022.
  11. Calibration for the (computationally-identifiable) masses. arXiv preprint arXiv:1711.08513, 2017.
  12. Satyen Kale. Efficient algorithms using the multiplicative weights update method. Princeton University, 2007.
  13. LaVAN: Localized and visible adversarial noise. In International Conference on Machine Learning, pages 2507–2515. PMLR, 2018.
  14. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In Jennifer G. Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 2569–2577. PMLR, 2018. URL http://proceedings.mlr.press/v80/kearns18a.html.
  15. Multiaccuracy: Black-box post-processing for fairness in classification. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’19, pages 247–254, New York, NY, USA, 2019. Association for Computing Machinery. URL https://doi.org/10.1145/3306618.3314287.
  16. The weighted majority algorithm. Information and computation, 108(2):212–261, 1994.
  17. Minority reports defense: Defending against adversarial patches. arXiv preprint arXiv:2004.13799, 2020.
  18. Efficient certified defenses against patch attacks on image classifiers. arXiv preprint arXiv:2102.04154, 2021.
  19. VC classes are adversarially robustly learnable, but only improperly. In Alina Beygelzimer and Daniel Hsu, editors, Proceedings of the Thirty-Second Conference on Learning Theory, volume 99 of Proceedings of Machine Learning Research, pages 2512–2530, Phoenix, USA, 25–28 Jun 2019. PMLR.
  20. Reducing adversarially robust learning to non-robust PAC learning. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/a822554e5403b1d370db84cfbc530503-Abstract.html.
  21. Multi-group agnostic PAC learnability. arXiv preprint arXiv:2105.09989, 2021.
  22. Simple and near-optimal algorithms for hidden stratification and multi-group learning. arXiv preprint arXiv:2112.12181, 2021.
  23. V. Vapnik. Estimation of Dependencies Based on Empirical Data. Springer-Verlag, New York, 1982.
  24. PatchGuard++: Efficient provable attack detection against adversarial patches. arXiv preprint arXiv:2104.12609, 2021.
  25. PatchGuard: Provable defense against adversarial patches using masks on small receptive fields. arXiv preprint arXiv:2005.10884, 2020.
  26. PatchCleanser: Certifiably robust defense against adversarial patches for any image classifier. In 31st USENIX Security Symposium (USENIX Security 22), pages 2065–2082, 2022.
  27. PatchAttack: A black-box texture-based attack with reinforcement learning. In European Conference on Computer Vision, pages 681–698. Springer, 2020.
  28. Clipped BagNet: Defending against sticker attacks with clipped bag-of-features. In 2020 IEEE Security and Privacy Workshops (SPW), pages 55–61. IEEE, 2020.
