
NoisyMix: Boosting Model Robustness to Common Corruptions (2202.01263v2)

Published 2 Feb 2022 in cs.LG and stat.ML

Abstract: For many real-world applications, obtaining stable and robust statistical performance is more important than simply achieving state-of-the-art predictive test accuracy, and thus robustness of neural networks is an increasingly important topic. Relatedly, data augmentation schemes have been shown to improve robustness with respect to input perturbations and domain shifts. Motivated by this, we introduce NoisyMix, a novel training scheme that promotes stability as well as leverages noisy augmentations in input and feature space to improve both model robustness and in-domain accuracy. NoisyMix produces models that are consistently more robust and that provide well-calibrated estimates of class membership probabilities. We demonstrate the benefits of NoisyMix on a range of benchmark datasets, including ImageNet-C, ImageNet-R, and ImageNet-P. Moreover, we provide theory to understand implicit regularization and robustness of NoisyMix.
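The abstract describes NoisyMix as combining mixup-style augmentation with noise injection in input (and feature) space. A minimal, hypothetical sketch of the input-space part of such a scheme is below; the function and hyperparameter names (`alpha`, `noise_std`) are illustrative and not taken from the paper:

```python
import numpy as np

def noisymix_batch(x1, y1, x2, y2, alpha=1.0, noise_std=0.1, rng=None):
    """Sketch of a NoisyMix-style augmentation step: mix two examples
    (mixup) and then inject additive Gaussian noise into the mixed input.
    This is an assumption-laden illustration, not the paper's exact recipe."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)                  # mixup coefficient in [0, 1]
    x_mix = lam * x1 + (1 - lam) * x2             # convex combination of inputs
    y_mix = lam * y1 + (1 - lam) * y2             # matching soft-label mixture
    x_noisy = x_mix + rng.normal(0.0, noise_std, x_mix.shape)  # input-space noise
    return x_noisy, y_mix
```

The paper additionally applies noisy mixing in feature space (inside the network) and a stability objective; the sketch above covers only the input-space augmentation for intuition.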

Authors (6)
  1. N. Benjamin Erichson (45 papers)
  2. Soon Hoe Lim (18 papers)
  3. Winnie Xu (12 papers)
  4. Francisco Utrera (3 papers)
  5. Ziang Cao (17 papers)
  6. Michael W. Mahoney (233 papers)
Citations (13)
