
SSTA: Salient Spatially Transformed Attack (2312.07258v1)

Published 12 Dec 2023 in cs.CV and eess.IV

Abstract: Extensive studies have demonstrated that deep neural networks (DNNs) are vulnerable to adversarial attacks, which poses a serious security risk to their further deployment, especially for AI models developed for the real world. Despite significant recent progress, existing attack methods still struggle to evade detection by the naked human eye, because adversarial examples (AEs) are typically formulated by adding noise. This shortcoming significantly increases the risk of exposure and can cause an attack to fail. Therefore, this paper proposes the Salient Spatially Transformed Attack (SSTA), a novel framework for crafting imperceptible AEs: it enhances stealthiness by estimating a smooth spatial transform over the most critical (salient) region of the image, rather than adding external noise to the whole image. Compared to state-of-the-art baselines, extensive experiments indicate that SSTA effectively improves the imperceptibility of AEs while maintaining a 100% attack success rate.
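The core idea, warping pixels with a smooth per-pixel flow field restricted to a salient region instead of adding noise, can be sketched as follows. This is a minimal illustrative implementation of flow-based bilinear warping (in the spirit of spatially transformed AEs); the function name, mask handling, and grayscale simplification are assumptions for illustration, not the paper's actual code.

```python
import numpy as np

def warp_with_flow(image, flow, mask):
    """Warp a grayscale image by a per-pixel flow field, applied only
    inside a salient-region mask, using bilinear interpolation.
    image: (H, W) float array; flow: (2, H, W) displacements (dy, dx);
    mask: (H, W) in {0, 1}, 1 marks the salient region to perturb."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Displace sampling coordinates only where the mask is set;
    # pixels outside the salient region are left untouched.
    src_y = np.clip(ys + flow[0] * mask, 0, h - 1)
    src_x = np.clip(xs + flow[1] * mask, 0, w - 1)
    # Bilinear interpolation between the four neighbouring pixels.
    y0 = np.floor(src_y).astype(int)
    x0 = np.floor(src_x).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy, wx = src_y - y0, src_x - x0
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

Because the warp is differentiable in the flow values, an attack can optimize the flow (e.g. by gradient descent against the target classifier's loss, plus a smoothness penalty) so that only the salient region is subtly deformed, which tends to be far less visible than additive noise.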

Authors (5)
  1. Renyang Liu (12 papers)
  2. Wei Zhou (311 papers)
  3. Sixin Wu (1 paper)
  4. Jun Zhao (469 papers)
  5. Kwok-Yan Lam (74 papers)
