Exploiting the Sensitivity of $L_2$ Adversarial Examples to Erase-and-Restore (2001.00116v2)

Published 1 Jan 2020 in cs.CV, cs.CR, cs.LG, and eess.IV

Abstract: By adding carefully crafted perturbations to input images, adversarial examples (AEs) can be generated to mislead neural-network-based image classifiers. $L_2$ adversarial perturbations by Carlini and Wagner (CW) are among the most effective and difficult-to-detect attacks. While many countermeasures against AEs have been proposed, detection of adaptive CW-$L_2$ AEs is still an open question. We find that, when some pixels in an $L_2$ AE are randomly erased and then restored with an inpainting technique, the classification result tends to change across these steps, while a benign sample does not show this symptom. We thus propose a novel AE detection technique, Erase-and-Restore (E&R), that exploits the intriguing sensitivity of $L_2$ attacks. Experiments conducted on two popular image datasets, CIFAR-10 and ImageNet, show that the proposed technique is able to detect over 98% of $L_2$ AEs and has a very low false positive rate on benign images. The detection technique exhibits high transferability: a detection system trained using CW-$L_2$ AEs can accurately detect AEs generated using another $L_2$ attack method. More importantly, our approach demonstrates strong resilience to adaptive $L_2$ attacks, filling a critical gap in AE detection. Finally, we interpret the detection technique through both visualization and quantification.
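The erase-and-restore procedure described in the abstract lends itself to a short sketch. Below is a minimal, hypothetical Python illustration of the idea, assuming a `classifier` callable that maps an image to a predicted label and using OpenCV's Telea inpainting as the restoration step. The erase fraction, trial count, and flip-rate decision rule are illustrative assumptions, not the paper's method; the authors train a detection system on such before/after discrepancies rather than thresholding a single statistic.

```python
# Minimal sketch of the Erase-and-Restore (E&R) idea, under the assumptions
# stated above. `classifier` is a user-supplied callable (assumption) that
# takes a uint8 image and returns a label.
import numpy as np
import cv2


def erase_and_restore_score(image, classifier, erase_frac=0.1, n_trials=8, seed=0):
    """Return the fraction of erase-and-restore trials whose predicted label
    differs from the original prediction; a high score suggests an L2 AE."""
    rng = np.random.default_rng(seed)
    orig_label = classifier(image)
    h, w = image.shape[:2]
    flips = 0
    for _ in range(n_trials):
        # Randomly erase a subset of pixels (non-zero mask values mark
        # the pixels to be restored by inpainting).
        mask = (rng.random((h, w)) < erase_frac).astype(np.uint8) * 255
        # Restore the erased pixels with OpenCV's Telea inpainting.
        restored = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
        if classifier(restored) != orig_label:
            flips += 1
    return flips / n_trials


# Usage (hypothetical): flag an input as adversarial when the flip rate
# is high; the 0.5 threshold is an arbitrary placeholder.
# score = erase_and_restore_score(img_uint8, predict_label)
# is_adversarial = score > 0.5
```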
