When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks (1911.10695v3)

Published 25 Nov 2019 in cs.LG, cs.CR, cs.CV, and stat.ML

Abstract: Recent advances in adversarial attacks uncover the intrinsic vulnerability of modern deep neural networks. Since then, extensive efforts have been devoted to enhancing the robustness of deep networks via specialized learning algorithms and loss functions. In this work, we take an architectural perspective and investigate the patterns of network architectures that are resilient to adversarial attacks. To obtain the large number of networks needed for this study, we adopt one-shot neural architecture search, training a large network once and then fine-tuning the sub-networks sampled from it. The sampled architectures together with the accuracies they achieve provide a rich basis for our study. Our "robust architecture Odyssey" reveals several valuable observations: 1) densely connected patterns result in improved robustness; 2) under computational budget, adding convolution operations to direct connection edges is effective; 3) the flow of solution procedure (FSP) matrix is a good indicator of network robustness. Based on these observations, we discover a family of robust architectures (RobNets). On various datasets, including CIFAR, SVHN, Tiny-ImageNet, and ImageNet, RobNets exhibit superior robustness performance to other widely used architectures. Notably, RobNets substantially improve the robust accuracy (~5% absolute gains) under both white-box and black-box attacks, even with fewer parameters. Code is available at https://github.com/gmh14/RobNets.

Authors (5)
  1. Minghao Guo (45 papers)
  2. Yuzhe Yang (43 papers)
  3. Rui Xu (199 papers)
  4. Ziwei Liu (368 papers)
  5. Dahua Lin (336 papers)
Citations (150)

Summary

Overview of "When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks"

The paper "When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks" addresses a crucial aspect of deep learning: the robustness of models against adversarial attacks. Whereas adversarial defenses have traditionally focused on specialized learning algorithms or loss functions, this work investigates the intrinsic impact of network architecture on robustness. It leverages Neural Architecture Search (NAS) to identify architectural patterns that enhance resilience to adversarial perturbations.

Methodology

The authors use one-shot NAS to explore a large set of network architectures: a single supernet is trained once, and sub-networks sampled from it are fine-tuned and evaluated for robustness. The study centers on three questions: which architectural patterns are crucial for adversarial robustness, how best to allocate model capacity under computational constraints, and what serves as an indicator of a robust architecture.
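The sampling step can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the cell-based search space with one operation chosen per edge mirrors the paper's setup, but the 14-edge cell size, the function names, and the density statistic shown here are assumptions made for this example.

```python
import random

# Illustrative sketch of one-shot NAS sub-network sampling (not the authors' code).
# The search space is cell-based: each of the E edges in a cell chooses one
# operation from a small set. A candidate architecture is the list of choices.

OPS = ["zero", "identity", "sep_conv_3x3"]  # assumed operation set

def sample_architecture(num_edges, rng=random):
    """Sample a sub-network from the supernet by picking one op per edge."""
    return [rng.randrange(len(OPS)) for _ in range(num_edges)]

def density(arch):
    """Fraction of edges carrying a real connection (non-'zero' op).
    The paper reports that denser connection patterns correlate with robustness."""
    return sum(1 for op in arch if OPS[op] != "zero") / len(arch)

# Sample a pool of candidates, as done before fine-tuning and evaluation.
rng = random.Random(0)
pool = [sample_architecture(num_edges=14, rng=rng) for _ in range(1000)]
densities = [density(a) for a in pool]
print(f"mean density over {len(pool)} sampled cells: {sum(densities) / len(densities):.3f}")
```

In the paper, each sampled sub-network is then briefly fine-tuned with adversarial training and evaluated under attack; a statistic like the density above is the kind of architectural property the study correlates with robust accuracy.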

Findings

  1. Densely Connected Patterns: Densely connected architectural patterns significantly enhance a network's robustness. This is consistent with the observation that DenseNet models are more robust than architectures with fewer dense connections.
  2. Strategies for Different Budgets: Under a fixed computational budget, adding convolution operations to direct edges, rather than to skip connections, is more effective at improving robustness. The gain is most pronounced at smaller budgets.
  3. FSP Matrix as an Indicator: The Flow of Solution Procedure (FSP) matrix is identified as a potential indicator of network robustness. A robust network exhibits a low FSP matrix loss, especially in its deeper layers.
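As a concrete illustration of the third finding, the FSP matrix (introduced by Yim et al., 2017) between the feature maps entering and leaving a network stage is a Gram matrix averaged over spatial positions. The sketch below is a minimal NumPy version under assumed feature shapes; `fsp_distance` is a name chosen here for the clean-versus-adversarial comparison that the paper finds to be small in robust networks, especially in deeper layers.

```python
import numpy as np

# Minimal sketch of the FSP (Flow of Solution Procedure) matrix used as a
# robustness indicator. f_in / f_out stand for the feature maps entering and
# leaving a stage; shapes here are illustrative.

def fsp_matrix(f_in, f_out):
    """FSP Gram matrix between feature maps of shape (h, w, c1) and (h, w, c2):
    G[i, j] = (1 / (h * w)) * sum over spatial positions of f_in[..., i] * f_out[..., j]
    """
    h, w, _ = f_in.shape
    return np.einsum("hwi,hwj->ij", f_in, f_out) / (h * w)

def fsp_distance(f_in_clean, f_out_clean, f_in_adv, f_out_adv):
    """Mean squared difference between clean and adversarial FSP matrices.
    The paper observes that robust architectures keep this small, particularly
    in their deeper layers."""
    g_clean = fsp_matrix(f_in_clean, f_out_clean)
    g_adv = fsp_matrix(f_in_adv, f_out_adv)
    return float(np.mean((g_clean - g_adv) ** 2))

# Toy check: identical clean and adversarial features give zero FSP distance.
rng = np.random.default_rng(0)
f_in = rng.standard_normal((8, 8, 16))
f_out = rng.standard_normal((8, 8, 32))
print(fsp_distance(f_in, f_out, f_in, f_out))  # 0.0
```

Any perturbation of the inputs that changes the features raises the distance, which is what makes it usable as a screening signal for candidate architectures.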

Results

Applying these architectural insights yields the RobNet family, a series of architectures with superior adversarial robustness across datasets including CIFAR, SVHN, Tiny-ImageNet, and ImageNet. Notably, RobNets improve robust accuracy by approximately 5% in absolute terms, even when constrained by parameter count or computational resources. RobNet models also retain their robustness across datasets, underscoring the transferability and efficacy of the NAS-derived architectural recommendations.

Implications and Future Directions

The findings of this research hold significant implications for the design and deployment of neural networks in environments subject to adversarial threats. By emphasizing the structure of network architectures as a key component of adversarial defense, this paper broadens the scope of research in adversarial robustness beyond algorithmic enhancements.

Looking forward, further exploration into NAS-derived architectures may yield even more robust models, capable of withstanding increasingly sophisticated adversarial attacks. The symbiotic relationship between NAS and adversarial robustness can also pave the way for automated systems that adaptively fine-tune architectures in response to evolving threats.

Overall, this paper contributes a novel perspective to adversarial defense strategies by aligning neural architecture design with robustness, offering a pathway to more resilient AI systems.