Deterministic Certification to Adversarial Attacks via Bernstein Polynomial Approximation (2011.14085v1)
Abstract: Randomized smoothing has established state-of-the-art provable robustness against $\ell_2$ norm adversarial attacks with high probability. However, the Gaussian data augmentation it introduces causes a severe drop in natural accuracy. This raises the question: "Is it possible to construct a smoothed classifier without randomization while maintaining natural accuracy?" We find the answer is yes. We study how to transform any classifier into a certifiably robust classifier using the Bernstein polynomial, a classical and elegant mathematical tool. Our method provides a deterministic algorithm for decision-boundary smoothing. We also introduce a distinctive approach to norm-independent certified robustness via numerical solutions of nonlinear systems of equations. Theoretical analyses and experimental results indicate that our method is promising for classifier smoothing and robustness certification.
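To illustrate the core idea of deterministic smoothing via Bernstein polynomials, below is a minimal sketch (not the authors' implementation) that approximates a hard classifier score on the normalized interval $[0,1]$ with a degree-$n$ Bernstein polynomial evaluated at the deterministic nodes $k/n$. The function names (`bernstein_smooth_1d`, `score_fn`) and the toy decision boundary are illustrative assumptions, not taken from the paper.

```python
# Sketch: deterministic smoothing of a 1-D classifier score via Bernstein
# polynomial approximation on [0, 1]. Assumed/illustrative code, not the
# paper's algorithm; it only shows how B_n[f](t) = sum_k f(k/n) b_{k,n}(t)
# smooths a sharp decision boundary without any random sampling.
import numpy as np
from scipy.special import comb


def bernstein_basis(n, k, t):
    """Bernstein basis polynomial b_{k,n}(t) = C(n,k) t^k (1-t)^(n-k)."""
    return comb(n, k) * t**k * (1 - t) ** (n - k)


def bernstein_smooth_1d(score_fn, n):
    """Return the degree-n Bernstein approximation of a scalar score
    function on [0, 1], built from deterministic evaluations at k/n."""
    nodes = np.arange(n + 1) / n
    values = np.array([score_fn(x) for x in nodes])

    def smoothed(t):
        basis = np.array([bernstein_basis(n, k, t) for k in range(n + 1)])
        return float(values @ basis)

    return smoothed


if __name__ == "__main__":
    # Toy hard classifier with a sharp decision boundary at t = 0.5.
    hard_score = lambda t: 1.0 if t > 0.5 else 0.0
    g = bernstein_smooth_1d(hard_score, n=20)
    for t in (0.3, 0.45, 0.5, 0.55, 0.7):
        print(f"t={t:.2f}  raw={hard_score(t):.0f}  smoothed={g(t):.3f}")
```

Unlike randomized smoothing, this construction needs no Gaussian noise at inference time: the smoothed score is a fixed polynomial of the input, so the resulting decision boundary (and any certificate derived from it) is deterministic.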
- Ching-Chia Kao
- Jhe-Bang Ko
- Chun-Shien Lu