VC Classes are Adversarially Robustly Learnable, but Only Improperly (1902.04217v2)
Published 12 Feb 2019 in cs.LG and stat.ML
Abstract: We study the question of learning an adversarially robust predictor. We show that any hypothesis class $\mathcal{H}$ with finite VC dimension is robustly PAC learnable with an improper learning rule. The requirement of being improper is necessary, as we exhibit examples of hypothesis classes $\mathcal{H}$ with finite VC dimension that are not robustly PAC learnable with any proper learning rule.
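As a rough sketch of the setting (the notation below follows the standard robust-learning convention and is not quoted from the abstract; in particular, the perturbation set $\mathcal{U}(x)$, e.g. an $\ell_\infty$ ball around $x$, is an assumed ingredient): the adversarially robust error of a predictor $h$ on a distribution $D$ is commonly written as
$$
\mathrm{err}_{\mathcal{U}}(h; D) \;=\; \Pr_{(x,y)\sim D}\Big[\,\exists\, z \in \mathcal{U}(x) : h(z) \neq y \,\Big],
$$
and robust PAC learning asks for a rule that, from i.i.d. samples, outputs a predictor whose robust error is within $\epsilon$ of $\inf_{h \in \mathcal{H}} \mathrm{err}_{\mathcal{U}}(h; D)$ with probability at least $1 - \delta$. In this language, the abstract's claim is that such a rule exists for every finite-VC class $\mathcal{H}$, but the returned predictor may have to lie outside $\mathcal{H}$ (i.e., the rule is improper).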