MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius (2001.02378v4)

Published 8 Jan 2020 in cs.LG, cs.CR, and stat.ML

Abstract: Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly. In this paper, we propose the MACER algorithm, which learns robust models without using adversarial training but performs better than all existing provable l2-defenses. Recent work shows that randomized smoothing can be used to provide a certified l2 radius to smoothed classifiers, and our algorithm trains provably robust smoothed classifiers via MAximizing the CErtified Radius (MACER). The attack-free characteristic makes MACER faster to train and easier to optimize. In our experiments, we show that our method can be applied to modern deep neural networks on a wide range of datasets, including Cifar-10, ImageNet, MNIST, and SVHN. For all tasks, MACER spends less training time than state-of-the-art adversarial training algorithms, and the learned models achieve larger average certified radius.

Citations (167)

Summary

An Expert Overview of MACER: Attack-Free and Scalable Robust Training via Maximizing Certified Radius

The paper "MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius" introduces a novel approach to robust training of machine learning models that targets the limitations of adversarial training. Adversarial training, while popular, is attack-dependent and computationally expensive because it must iteratively generate adversarial examples. The authors instead propose MACER, which sidesteps these issues by directly maximizing the certified radius, a provable measure of robustness, rather than generating adversarial examples.
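The certified radius MACER maximizes comes from the randomized-smoothing literature: if Gaussian-perturbed copies of an input are assigned to the top class with probability p_top and to the runner-up class with probability p_runner_up, the smoothed classifier's prediction is provably unchanged within an l2 ball of radius sigma/2 * (Phi^-1(p_top) - Phi^-1(p_runner_up)). A minimal sketch of that computation follows; the function name is illustrative, not from the paper:

```python
from statistics import NormalDist

def certified_radius(p_top, p_runner_up, sigma):
    """l2 certified radius of a randomized-smoothed classifier.

    p_top, p_runner_up: probabilities that the base classifier assigns
    the top and runner-up classes to Gaussian-perturbed copies of the
    input. sigma: standard deviation of the smoothing noise.
    Radius formula: sigma/2 * (Phi^-1(p_top) - Phi^-1(p_runner_up)),
    where Phi^-1 is the inverse standard-normal CDF.
    """
    inv = NormalDist().inv_cdf
    return 0.5 * sigma * (inv(p_top) - inv(p_runner_up))

# e.g. sigma = 0.25, top-class probability 0.95 vs. runner-up 0.05
r = certified_radius(0.95, 0.05, 0.25)
```

The radius grows with both the smoothing noise level sigma and the margin between the two class probabilities, which is why maximizing it encourages confident, stable predictions under noise.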

Key Contributions

  1. Concept Introduction: MACER, short for MAximizing the CErtified Radius, departs from previous adversarial training approaches by eliminating attack dependency. Instead, it builds on randomized smoothing, a technique that attaches to each input a certified l2 radius within which the smoothed classifier's prediction provably cannot change.
  2. Algorithm Efficiency: Because randomized smoothing yields the certified radius analytically, training to maximize it requires no time-consuming attack iterations, which makes MACER substantially faster to train. The experiments quantify this by comparing MACER with adversarial training in both training time and robustness.
  3. Technical Formulations: The theoretical framework ensures the surrogate loss functions have desirable properties: differentiability, numerical stability, and serving as an upper bound on the robust classification error. This is achieved through techniques such as soft randomized smoothing, which yields differentiable bounds.
  4. Empirical Evaluation: Across CIFAR-10, ImageNet, MNIST, and SVHN, MACER demonstrates superior performance in certified radius and empirically measured robustness. Notably, it trains models that achieve larger average certified radii than state-of-the-art adversarially trained models, often with reduced training time.
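The soft randomized smoothing mentioned in the technical formulations replaces the smoothed classifier's hard majority vote, which is not differentiable, with the expectation of the base classifier's softmax output under Gaussian noise. A rough NumPy sketch of that idea follows; the function names, toy classifier, and Monte Carlo sample count are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def soft_smoothed_probs(logits_fn, x, sigma, n_samples=200, seed=0):
    """Monte Carlo estimate of the 'soft' smoothed classifier:
    the average softmax output of the base classifier over
    Gaussian perturbations of the input x.

    logits_fn: maps an input array to a vector of class logits
    (a hypothetical stand-in for the base neural network).
    """
    rng = np.random.default_rng(seed)
    acc = None
    for _ in range(n_samples):
        z = logits_fn(x + sigma * rng.standard_normal(x.shape))
        e = np.exp(z - z.max())            # numerically stable softmax
        p = e / e.sum()
        acc = p if acc is None else acc + p
    return acc / n_samples

# Toy base classifier: a fixed linear map from 2-d inputs to 3 class logits
W = np.array([[1.0, -0.5], [-1.0, 0.5], [0.2, 0.2]])
probs = soft_smoothed_probs(lambda v: W @ v, np.array([2.0, -1.0]), sigma=0.5)
```

Because these soft probabilities are differentiable with respect to the network parameters, they can be plugged into the certified-radius expression and the radius optimized directly by gradient descent.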

Implications and Future Directions

The MACER algorithm opens new avenues for efficiently training models to be robust against adversarial attacks without being tied to specific adversarial example generation strategies. This research also suggests potential benefits in adapting similar methodologies to other domains where robustness and computational efficiency are critical.

Moreover, future work may build on MACER's framework to explore semi-supervised learning scenarios, enabling robust generalization even in the presence of unlabeled data, which recent research suggests can aid adversarial robustness. The observation that robust training need not depend on adversarial samples points toward certification-based techniques as a promising alternative for developing robust machine learning models.

Ultimately, the MACER methodology represents a foundational shift that could lead to more scalable and practically applicable robust learning strategies, marking a step forward in both theoretical and applied dimensions of machine learning security.
