
Tightening the Approximation Error of Adversarial Risk with Auto Loss Function Search (2111.05063v2)

Published 9 Nov 2021 in cs.LG and cs.AI

Abstract: Despite achieving great success, Deep Neural Networks (DNNs) are vulnerable to adversarial examples. Accurately evaluating the adversarial robustness of DNNs is therefore critical for their deployment in real-world applications. An ideal indicator of robustness is the adversarial risk. Unfortunately, because it involves maximizing the 0-1 loss, computing the true risk is technically intractable. The most common workaround is to compute an approximate risk by replacing the 0-1 loss with a surrogate, such as the Cross-Entropy (CE) loss or the Difference of Logits Ratio (DLR) loss. However, these functions are all manually designed and may not be well suited for adversarial robustness evaluation. In this paper, we leverage AutoML to tighten the error (gap) between the true and approximate risks. Our main contributions are as follows. First, we propose AutoLoss-AR, the first method that searches for surrogate losses for adversarial risk over an elaborately designed search space. Experimental results on 10 adversarially trained models demonstrate the effectiveness of the proposed method: the risks evaluated with the best-discovered losses are 0.2% to 1.6% better than those evaluated with the handcrafted baselines. Second, 5 surrogate losses with clean, readable formulas are distilled out and tested on 7 unseen adversarially trained models. These losses outperform the baselines by 0.8% to 2.4%, indicating that they can be reused on their own as new knowledge. Finally, possible reasons for the better performance of these losses are explored.
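The evaluation setup the abstract describes, replacing the 0-1 loss inside the inner maximization with a differentiable surrogate (such as CE or DLR) and then measuring the 0-1 error on the resulting adversarial examples, can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the PyTorch PGD attack, the dlr_loss helper, and the eps/alpha/steps values are assumptions, and a loss discovered by AutoLoss-AR would simply be passed in place of the surrogate argument.

```python
# Sketch of approximate adversarial risk evaluation with a surrogate loss.
# Assumptions: a PyTorch classifier `model`, inputs in [0, 1], an L-infinity
# PGD attack; these settings are illustrative, not the paper's configuration.
import torch
import torch.nn.functional as F

def dlr_loss(logits, y):
    # Difference of Logits Ratio loss (Croce & Hein), one handcrafted
    # surrogate for the 0-1 loss; assumes at least 3 classes.
    z_sorted, _ = logits.sort(dim=1, descending=True)
    z_y = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    pred_correct = z_sorted[:, 0] == z_y
    # Highest incorrect-class logit: top-2 if the prediction is correct,
    # otherwise top-1.
    z_other = torch.where(pred_correct, z_sorted[:, 1], z_sorted[:, 0])
    return -(z_y - z_other) / (z_sorted[:, 0] - z_sorted[:, 2] + 1e-12)

def approx_adversarial_risk(model, x, y, surrogate,
                            eps=8 / 255, alpha=2 / 255, steps=10):
    # PGD maximizes the surrogate loss inside an L-infinity ball around x;
    # the 0-1 error on the resulting points approximates the adversarial risk.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = surrogate(model(x_adv), y).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            # Project back onto the epsilon-ball and the valid pixel range.
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    with torch.no_grad():
        return (model(x_adv).argmax(dim=1) != y).float().mean().item()

# Usage: compare surrogates on the same model and batch, e.g.
# risk_ce  = approx_adversarial_risk(model, x, y,
#     lambda z, t: F.cross_entropy(z, t, reduction="none"))
# risk_dlr = approx_adversarial_risk(model, x, y, dlr_loss)
```

Because each surrogate induces a different inner maximization, the measured 0-1 error (and hence the tightness of the approximation) depends on the surrogate; this is the gap that the paper's loss search aims to reduce.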

Citations (2)
