Erratum Concerning the Obfuscated Gradients Attack on Stochastic Activation Pruning (2010.00071v1)
Published 30 Sep 2020 in cs.LG and stat.ML
Abstract: Stochastic Activation Pruning (SAP) (Dhillon et al., 2018) is a defense against adversarial examples that was attacked and found to be broken by the "Obfuscated Gradients" paper (Athalye et al., 2018). We discover a flaw in the re-implementation that artificially weakens SAP. When SAP is applied properly, the proposed attack is not effective. However, we show that a new use of the BPDA (Backward Pass Differentiable Approximation) attack technique can still reduce the accuracy of SAP to 0.1%.
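To make the two techniques named in the abstract concrete, here is a minimal PyTorch sketch, not the authors' code, of an SAP-style pruning layer paired with a BPDA-style straight-through backward pass (forward applies the stochastic pruning, backward treats the layer as the identity). Names such as `sap_prune`, `SAPWithBPDA`, and `num_samples` are illustrative assumptions, not taken from either paper.

```python
import torch


def sap_prune(h: torch.Tensor, num_samples: int) -> torch.Tensor:
    """SAP-style pruning sketch: sample indices (with replacement) with
    probability proportional to |h_i|, keep only sampled activations, and
    rescale them so the pruned output matches h in expectation."""
    flat = h.flatten()
    probs = flat.abs() / flat.abs().sum().clamp_min(1e-12)
    idx = torch.multinomial(probs, num_samples, replacement=True)
    keep = torch.zeros_like(flat, dtype=torch.bool)
    keep[idx] = True
    # Probability each unit is sampled at least once; its inverse is the rescale factor.
    keep_prob = 1.0 - (1.0 - probs) ** num_samples
    pruned = torch.where(keep, flat / keep_prob.clamp_min(1e-12),
                         torch.zeros_like(flat))
    return pruned.view_as(h)


class SAPWithBPDA(torch.autograd.Function):
    """Forward: apply stochastic pruning. Backward: BPDA-style identity
    approximation, so attack gradients flow through the randomized layer."""

    @staticmethod
    def forward(ctx, h, num_samples):
        return sap_prune(h, num_samples)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: gradient passes as if the layer were the identity.
        return grad_output, None


if __name__ == "__main__":
    h = torch.randn(1, 64, requires_grad=True)
    out = SAPWithBPDA.apply(h, 32)
    out.sum().backward()
    print(h.grad.shape)  # gradients reach the input despite the stochastic layer
```

This is only a sketch of the general mechanism: an attacker would typically average such gradients over many random draws (expectation over transformation) when optimizing adversarial perturbations against a stochastic defense.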