
SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness (2111.09277v1)

Published 17 Nov 2021 in cs.LG and cs.AI

Abstract: Randomized smoothing is currently a state-of-the-art method for constructing a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations. Under this paradigm, the robustness of a classifier is aligned with its prediction confidence: higher confidence from a smoothed classifier implies better robustness. This motivates us to rethink the fundamental trade-off between accuracy and robustness in terms of calibrating the confidences of a smoothed classifier. In this paper, we propose a simple training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup: it trains on convex combinations of samples along the direction of adversarial perturbation for each input. The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness in smoothed classifiers, and offers an intuitive way to adaptively set a new decision boundary between these samples for better robustness. Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers compared to existing state-of-the-art robust training methods.
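The confidence-robustness link the abstract relies on comes from the standard randomized-smoothing certificate (Cohen et al., 2019): for a base classifier $f$ and Gaussian noise $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$, the smoothed classifier $g(x) = \arg\max_c \mathbb{P}(f(x+\varepsilon) = c)$ is certifiably robust at $x$ within the $\ell_2$-radius $R = \frac{\sigma}{2}\left(\Phi^{-1}(p_A) - \Phi^{-1}(p_B)\right)$, where $p_A$ and $p_B$ are the noise-averaged probabilities of the top two classes and $\Phi^{-1}$ is the inverse standard-Gaussian CDF. The radius grows monotonically with $p_A$, which is why calibrating the smoothed classifier's confidence directly controls its certified robustness.

As a rough illustration of the self-mixup idea, below is a minimal PyTorch sketch of one training step, assuming an image classifier with 4-D inputs. The function name `smoothmix_step`, the PGD-style attack schedule, the number of noise draws, and the blend of the clean label toward the uniform distribution are all illustrative assumptions, not the authors' reference implementation:

```python
# Hypothetical sketch of a SmoothMix-style training step (names and
# hyperparameters are illustrative, not the paper's reference code).
import torch
import torch.nn.functional as F

def smoothmix_step(model, x, y, sigma=0.25, attack_steps=4, alpha=1.0,
                   num_noises=2, num_classes=10):
    """(1) Find an adversarial point of the *smoothed* classifier near x,
    (2) train on a convex combination of x and that point, with the label
    mixed toward the uniform distribution (lower target confidence)."""
    # (1) PGD-like l2 attack on a noisy copy, approximating an adversarial
    # direction for the smoothed classifier.
    x_adv = x.clone().detach()
    for _ in range(attack_steps):
        x_adv.requires_grad_(True)
        noise = sigma * torch.randn_like(x_adv)
        loss = F.cross_entropy(model(x_adv + noise), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # l2-normalized gradient ascent step of size alpha/attack_steps
        g = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
        x_adv = (x_adv + (alpha / attack_steps) * g).detach()

    # (2) Self-mixup: convex combination along the adversarial direction.
    lam = torch.rand(x.size(0), 1, 1, 1, device=x.device)  # lambda ~ U(0,1)
    x_mix = (1 - lam) * x + lam * x_adv

    # Mixed soft label: one-hot label blended toward uniform, so the model
    # becomes *less* confident as it approaches the adversarial endpoint.
    y_onehot = F.one_hot(y, num_classes).float()
    uniform = torch.full_like(y_onehot, 1.0 / num_classes)
    lam_flat = lam.view(-1, 1)
    y_mix = (1 - lam_flat) * y_onehot + lam_flat * uniform

    # Gaussian-augmented loss on the mixed sample, against the soft label.
    loss = 0.0
    for _ in range(num_noises):
        logits = model(x_mix + sigma * torch.randn_like(x_mix))
        loss = loss + F.kl_div(F.log_softmax(logits, dim=1), y_mix,
                               reduction="batchmean")
    return loss / num_noises
```

In a training loop, one would call `loss = smoothmix_step(model, x, y)` followed by `loss.backward()` and an optimizer step. The design point mirrors the abstract: rather than forcing full confidence everywhere along the adversarial direction, the mixed soft label deliberately lowers the target confidence as $\lambda \to 1$, so the over-confident, near off-class region is pushed toward abstention instead of being memorized.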

Authors (6)
  1. Jongheon Jeong (26 papers)
  2. Sejun Park (28 papers)
  3. Minkyu Kim (51 papers)
  4. Heung-Chang Lee (6 papers)
  5. Doguk Kim (1 paper)
  6. Jinwoo Shin (196 papers)
Citations (51)
