
Provable Robustness of Adversarial Training for Learning Halfspaces with Noise (2104.09437v1)

Published 19 Apr 2021 in cs.LG, cs.CR, math.OC, and stat.ML

Abstract: We analyze the properties of adversarial training for learning adversarially robust halfspaces in the presence of agnostic label noise. Denoting $\mathsf{OPT}_{p,r}$ as the best robust classification error achieved by a halfspace that is robust to perturbations in $\ell_p$ balls of radius $r$, we show that adversarial training on the standard binary cross-entropy loss yields adversarially robust halfspaces up to (robust) classification error $\tilde O(\sqrt{\mathsf{OPT}_{2,r}})$ for $p=2$, and $\tilde O(d^{1/4} \sqrt{\mathsf{OPT}_{\infty, r}} + d^{1/2} \mathsf{OPT}_{\infty,r})$ when $p=\infty$. Our results hold for distributions satisfying anti-concentration properties enjoyed by log-concave isotropic distributions, among others. We additionally show that if one instead uses a nonconvex sigmoidal loss, adversarial training yields halfspaces with an improved robust classification error of $O(\mathsf{OPT}_{2,r})$ for $p=2$, and $O(d^{1/4}\mathsf{OPT}_{\infty, r})$ when $p=\infty$. To the best of our knowledge, this is the first work to show that adversarial training provably yields robust classifiers in the presence of noise.
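The procedure the abstract analyzes is adversarial training: minimize the worst-case loss over perturbations in an $\ell_p$ ball of radius $r$. For a halfspace $x \mapsto \mathrm{sign}(\langle w, x\rangle)$ with the logistic (binary cross-entropy) loss, the inner maximization over an $\ell_2$ ball has a closed form (the worst perturbation is $-r\,y\,w/\|w\|$, shrinking the margin by $r\|w\|$), so the outer minimization reduces to plain gradient descent on a robust loss. Below is a minimal illustrative sketch of that idea; it is not the paper's exact algorithm, and all function names and hyperparameters are my own:

```python
import numpy as np

def robust_logistic_loss(w, X, y, r):
    # Against a halfspace sign(<w, x>), the worst-case l2 perturbation of
    # radius r for the logistic loss is delta = -r * y * w / ||w||, so the
    # worst-case ("robust") margin is y<w, x> - r ||w||.
    margins = y * (X @ w) - r * np.linalg.norm(w)
    return np.mean(np.logaddexp(0.0, -margins))  # mean log(1 + exp(-margin))

def adversarial_train(X, y, r, lr=0.1, steps=500):
    # Gradient descent on the robust logistic loss (a sketch only).
    w = np.zeros(X.shape[1])
    w[0] = 1.0  # nonzero init so ||w|| > 0
    for _ in range(steps):
        margins = y * (X @ w) - r * np.linalg.norm(w)
        # sigmoid(-margin), written via tanh for numerical stability
        s = 0.5 * (1.0 - np.tanh(margins / 2.0))
        # d/dw of the per-sample loss: s * (-y x + r w / ||w||)
        grad = ((s * -y)[:, None] * X).mean(axis=0) \
             + s.mean() * r * w / np.linalg.norm(w)
        w -= lr * grad
    return w
```

On data labeled by a true halfspace, the trained direction recovers the labels; with label noise, the abstract's guarantees bound how far the resulting robust error can be from $\mathsf{OPT}_{2,r}$.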

Citations (11)
