SSTA: Salient Spatially Transformed Attack (2312.07258v1)
Abstract: Extensive studies have demonstrated that deep neural networks (DNNs) are vulnerable to adversarial attacks, which poses a serious security risk to their deployment, especially for AI models operating in the real world. Despite recent progress, existing attack methods remain easy to detect by the naked human eye because they construct adversarial examples (AEs) by adding noise to the input. This raises the risk of exposure and can cause the attack to fail. In this paper, we therefore propose the Salient Spatially Transformed Attack (SSTA), a novel framework for crafting imperceptible AEs: instead of adding external noise to the whole image, SSTA estimates a smooth spatial-transform field over the most critical (salient) region of the image. Extensive experiments show that, compared to state-of-the-art baselines, SSTA markedly improves the imperceptibility of AEs while maintaining a 100% attack success rate.
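The core mechanism the abstract describes (optimizing a smooth per-pixel flow field restricted to a salient region, rather than adding noise) can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the helper names (`salient_flow_attack`, `flow_smoothness`, `warp`), the CW-style classification loss, the smoothness weight `tau`, and the precomputed saliency mask are all assumptions of this sketch.

```python
# Minimal sketch of a salient spatially transformed attack (PyTorch).
# Assumes a differentiable classifier `model` and a saliency mask
# `mask` of shape (1, 1, H, W) with values in [0, 1], e.g. from an
# off-the-shelf salient-object detector.
import torch
import torch.nn.functional as F

def flow_smoothness(flow):
    # Total-variation-style penalty encouraging a smooth flow field.
    dx = flow[:, :, :, 1:] - flow[:, :, :, :-1]
    dy = flow[:, :, 1:, :] - flow[:, :, :-1, :]
    return dx.abs().mean() + dy.abs().mean()

def warp(x, flow):
    # Bilinearly resample the image along the per-pixel flow, expressed
    # in normalized [-1, 1] grid coordinates. Interpolation keeps pixel
    # values inside the original range, so no clamping is needed.
    b, _, h, w = x.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).to(x.device)
    grid = base + flow.permute(0, 2, 3, 1)  # (b, h, w, 2)
    return F.grid_sample(x, grid, align_corners=True)

def salient_flow_attack(model, x, label, mask, steps=200, lr=0.01, tau=0.05):
    # The flow is zeroed outside the salient region, so only the most
    # critical area of the image is deformed.
    flow = torch.zeros(x.size(0), 2, *x.shape[-2:],
                       device=x.device, requires_grad=True)
    opt = torch.optim.Adam([flow], lr=lr)
    for _ in range(steps):
        x_adv = warp(x, flow * mask)
        logits = model(x_adv)
        # Untargeted CW-style loss: push the true-class logit below the
        # runner-up, with a small margin, plus a smoothness penalty.
        true = logits.gather(1, label.unsqueeze(1)).squeeze(1)
        other = logits.scatter(1, label.unsqueeze(1), float("-inf")).amax(1)
        loss = torch.clamp(true - other, min=-0.1).mean() \
             + tau * flow_smoothness(flow)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return warp(x, flow * mask).detach()
```

In this reading, the attack succeeds when the warped image is misclassified while the flow stays small and smooth inside the salient mask; since no additive noise is introduced, the resulting AE is a subtle local deformation rather than a perturbation of every pixel.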
Authors:
- Renyang Liu
- Wei Zhou
- Sixin Wu
- Jun Zhao
- Kwok-Yan Lam