NoiseAttack: An Evasive Sample-Specific Multi-Targeted Backdoor Attack Through White Gaussian Noise (2409.02251v1)

Published 3 Sep 2024 in cs.CV, cs.AI, cs.CR, and cs.LG

Abstract: Backdoor attacks pose a significant threat when using third-party data for deep learning development. In these attacks, data can be manipulated to cause a trained model to behave improperly when a specific trigger pattern is applied, providing the adversary with unauthorized advantages. While most existing works focus on designing trigger patterns, both visible and invisible, to poison the victim class, they typically result in a single targeted class upon the success of the backdoor attack, meaning that the victim class can only be converted to another class based on the adversary's predefined value. In this paper, we address this issue by introducing a novel sample-specific multi-targeted backdoor attack, namely NoiseAttack. Specifically, we adopt White Gaussian Noise (WGN) with various Power Spectral Densities (PSD) as our underlying triggers, coupled with a unique training strategy to execute the backdoor attack. This work is the first of its kind to launch a vision backdoor attack with the intent to generate multiple targeted classes with minimal input configuration. Furthermore, our extensive experimental results demonstrate that NoiseAttack can achieve a high attack success rate against popular network architectures and datasets, as well as bypass state-of-the-art backdoor detection methods. Our source code and experiments are available at https://github.com/SiSL-URI/NoiseAttack/tree/main.
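
As a rough illustration of the trigger mechanism the abstract describes, the sketch below overlays zero-mean white Gaussian noise of a chosen power on an image and relabels the poisoned sample according to that power, so that different noise powers map to different target classes. This is a minimal sketch under stated assumptions: the function names, the use of the noise standard deviation as a stand-in for the PSD, the `PSD_TO_TARGET` mapping, and the poisoning rate are all illustrative and are not the paper's exact procedure (see the linked repository for the authors' implementation).

```python
import numpy as np

def add_wgn_trigger(image, noise_std, seed=None):
    """Overlay zero-mean white Gaussian noise of a chosen power on an image.

    `image` is assumed to be a float array scaled to [0, 1]; `noise_std`
    controls the noise power (used here as a simple proxy for the PSD).
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(loc=0.0, scale=noise_std, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)

# Hypothetical multi-target configuration: each noise power is associated
# with a different target label for the poisoned samples.
PSD_TO_TARGET = {0.05: 1, 0.10: 2, 0.15: 3}

def poison_samples(images, labels, poison_rate=0.1):
    """Return copies of (images, labels) with a fraction of samples poisoned."""
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(0)
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    stds = list(PSD_TO_TARGET)
    for k, i in enumerate(idx):
        std = stds[k % len(stds)]          # cycle through the noise powers
        images[i] = add_wgn_trigger(images[i], std, seed=int(i))
        labels[i] = PSD_TO_TARGET[std]     # relabel to that power's target class
    return images, labels
```

A model trained on such a poisoned set would, in principle, associate each noise power with its own target label while behaving normally on clean inputs; the paper's actual training strategy and trigger construction differ in detail.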
