DFB: A Data-Free, Low-Budget, and High-Efficacy Clean-Label Backdoor Attack (2308.09487v4)

Published 18 Aug 2023 in cs.CR and cs.AI

Abstract: In the domain of backdoor attacks, accurate labeling of injected data is essential for evading rudimentary detection mechanisms. This imperative has catalyzed the development of clean-label attacks, which are notably more elusive as they preserve the original labels of the injected data. Current clean-label attack methodologies primarily depend on extensive knowledge of the training dataset. In practice, however, such comprehensive dataset access is often unattainable, given that training datasets are typically compiled from various independent sources. Departing from conventional clean-label attack methodologies, our research introduces DFB, a data-free, low-budget, and high-efficacy clean-label backdoor attack. DFB is unique in its independence from training data access, requiring only knowledge of a specific target class. Tested on CIFAR10, Tiny-ImageNet, and TSRD, DFB demonstrates remarkable efficacy with minimal poisoning rates of just 0.1%, 0.025%, and 0.4%, respectively. These rates are significantly lower than those required by existing methods such as LC, HTBA, BadNets, and Blend, yet DFB achieves superior attack success rates. Furthermore, our findings reveal that DFB poses a formidable challenge to four established backdoor defense algorithms, indicating its potential as a robust tool in advanced clean-label attack strategies.
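The defining property of the clean-label setting described in the abstract is that the attacker stamps a trigger onto samples of the target class without relabeling them, so a simple label-consistency audit finds nothing amiss. The NumPy sketch below is a minimal, hedged illustration of that generic injection step at a fixed poisoning budget; the function name, the bottom-right corner-stamp trigger, and the random sampling strategy are assumptions for illustration only, not the DFB algorithm itself, whose distinguishing feature is that it requires no access to the training set at all.

```python
import numpy as np

def poison_clean_label(images, labels, target_class, trigger,
                       poison_rate=0.001, seed=0):
    """Illustrative clean-label injection: stamp a trigger onto a small
    fraction of TARGET-CLASS images while leaving every label untouched.

    images : uint8 array, shape (N, H, W, C)
    labels : int array, shape (N,)
    trigger: uint8 patch, shape (h, w, C)
    """
    rng = np.random.default_rng(seed)
    target_idx = np.flatnonzero(labels == target_class)   # only the target class is touched
    n_poison = max(1, int(poison_rate * len(labels)))      # e.g. 0.1% of 50,000 -> 50 images
    n_poison = min(n_poison, target_idx.size)              # cannot poison more than exists
    chosen = rng.choice(target_idx, size=n_poison, replace=False)

    poisoned = images.copy()
    h, w = trigger.shape[:2]
    poisoned[chosen, -h:, -w:, :] = trigger                # hypothetical corner-stamp trigger
    return poisoned, labels, chosen                        # labels are returned unchanged
```

At the budgets the abstract reports, this footprint is tiny: 0.1% of CIFAR10's 50,000 training images is only 50 poisoned samples, all of which still carry their correct target-class labels.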

References (38)
  1. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv preprint arXiv:1712.05526, 2017.
  2. SentiNet: Detecting localized universal attacks against deep learning systems. arXiv preprint arXiv:1812.00292, 2018.
  3. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
  4. Taiwan stop sign recognition with customize anchor. In Proceedings of the 12th International Conference on Computer Modeling and Simulation, pages 51–55, 2020.
  5. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
  6. STRIP: A defence against trojan attacks on deep neural networks. In Proceedings of the 35th Annual Computer Security Applications Conference, pages 113–125, 2019.
  7. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
  8. BadNets: Identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733, 2017.
  9. Towards inspecting and eliminating trojan backdoors in deep neural networks. In 2020 IEEE International Conference on Data Mining (ICDM), pages 162–171. IEEE, 2020.
  10. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  11. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
  12. ImageNet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012.
  13. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.
  14. Ya Le and Xuan Yang. Tiny ImageNet visual recognition challenge. CS 231N, 7(7):3, 2015.
  15. Invisible backdoor attack with sample-specific triggers. In Proceedings of the IEEE/CVF international conference on computer vision, pages 16463–16472, 2021.
  16. Fine-pruning: Defending against backdooring attacks on deep neural networks. In International symposium on research in attacks, intrusions, and defenses, pages 273–294. Springer, 2018.
  17. Reflection backdoor: A natural backdoor attack on deep neural networks. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part X 16, pages 182–199. Springer, 2020.
  18. Multidomain active defense: Detecting multidomain backdoor poisoned samples via all-to-all decoupling training without clean datasets. Neural Networks, 2023.
  19. DIHBA: Dynamic, invisible and high attack success rate boundary backdoor attack with low poison ratio. Computers & Security, 129:103212, 2023.
  20. Input-aware dynamic backdoor attack. Advances in Neural Information Processing Systems, 33:3454–3464, 2020.
  21. WaNet: Imperceptible warping-based backdoor attack. arXiv preprint arXiv:2102.10369, 2021.
  22. Defending neural backdoors via generative distribution modeling. Advances in neural information processing systems, 32, 2019.
  23. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pages 234–241. Springer, 2015.
  24. Hidden trigger backdoor attacks. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 11957–11965, 2020.
  25. FaceHack: Triggering backdoored facial recognition systems using facial characteristics. arXiv preprint arXiv:2006.11623, 2020.
  26. Marketplaces for data: an initial survey. ACM SIGMOD Record, 42(1):15–26, 2013.
  27. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, pages 618–626, 2017.
  28. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
  29. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815, 2017.
  30. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  31. The German traffic sign recognition benchmark: a multi-class classification competition. In The 2011 international joint conference on neural networks, pages 1453–1460. IEEE, 2011.
  32. EfficientNet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning, pages 6105–6114. PMLR, 2019.
  33. Label-consistent backdoor attacks. arXiv preprint arXiv:1912.02771, 2019.
  34. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks. In 2019 IEEE Symposium on Security and Privacy (SP), pages 707–723. IEEE, 2019.
  35. Practical detection of trojan neural networks: Data-limited and data-free cases. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIII 16, pages 222–238. Springer, 2020.
  36. Defending against backdoor attack on deep neural networks. arXiv preprint arXiv:2002.12162, 2020.
  37. Narcissus: A practical clean-label backdoor attack with limited information. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, pages 771–785, 2023.
  38. Be careful with PyPI packages: You may unconsciously spread backdoor model weights. Proceedings of Machine Learning and Systems, 5, 2023.
