Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors (2402.12626v1)

Published 20 Feb 2024 in cs.LG and cs.CR

Abstract: Machine learning models have achieved great success in end-to-end supervised learning, which requires large amounts of labeled data that are not always feasible to obtain. Recently, many practitioners have shifted to self-supervised learning methods that use cheap unlabeled data to pre-train a general feature extractor, which can then be applied to personalized downstream tasks by training only an additional linear layer on limited labeled data. However, such a process may also raise concerns regarding data poisoning attacks. For instance, indiscriminate data poisoning attacks, which aim to decrease model utility by injecting a small number of poisoned samples into the training set, pose a security risk to machine learning models, but have only been studied for end-to-end supervised learning. In this paper, we extend the exploration of indiscriminate attacks to downstream tasks that apply pre-trained feature extractors. Specifically, we propose two types of attacks: (1) input space attacks, where we modify existing attacks to craft poisoned data directly in the input space; and, because the resulting constrained optimization is difficult, (2) feature targeted attacks, which mitigate this challenge in three stages: first acquiring target parameters for the linear head, then finding poisoned features by treating the learned feature representations as a dataset, and finally inverting the poisoned features back to the input space. Our experiments examine such attacks in two popular downstream settings: fine-tuning on the same dataset and transfer learning with domain adaptation. Empirical results reveal that transfer learning is more vulnerable to our attacks. Additionally, input space attacks are a strong threat if no countermeasures are applied, but are otherwise weaker than feature targeted attacks.
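
The three-stage feature targeted attack described above can be illustrated with a minimal sketch. The code below is an assumption-laden toy pipeline, not the authors' implementation: the frozen extractor, the label-flipping heuristic for choosing target head parameters, the feature-space objective, and all hyperparameters are illustrative stand-ins for the paper's actual choices.

```python
# Sketch of the three-stage feature targeted attack against a frozen
# pre-trained feature extractor + linear head. All components here are
# hypothetical stand-ins for the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy "pre-trained" extractor (frozen) and a small clean dataset.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128)).eval()
for p in feature_extractor.parameters():
    p.requires_grad_(False)
clean_x = torch.rand(256, 3, 32, 32)
clean_y = torch.randint(0, 10, (256,))

# Stage 1: acquire target parameters for the linear head, e.g. by fitting a
# head on clean features with flipped labels (one simple heuristic).
with torch.no_grad():
    feats = feature_extractor(clean_x)            # (N, 128) frozen features
target_head = nn.Linear(128, 10)
opt = torch.optim.Adam(target_head.parameters(), lr=1e-2)
flipped_y = (clean_y + 1) % 10
for _ in range(200):
    opt.zero_grad()
    F.cross_entropy(target_head(feats), flipped_y).backward()
    opt.step()

# Stage 2: treat the learned features as a dataset and optimize a small set of
# poisoned feature vectors so that the target head fits them well (a
# simplified proxy for the paper's feature-space poisoning objective).
n_poison = 25                                     # small poisoning budget
poison_feats = feats[:n_poison].clone().requires_grad_(True)
poison_y = clean_y[:n_poison]
opt_f = torch.optim.Adam([poison_feats], lr=1e-1)
for _ in range(200):
    opt_f.zero_grad()
    F.cross_entropy(target_head(poison_feats), poison_y).backward()
    opt_f.step()

# Stage 3: invert the poisoned features back to the input space by optimizing
# pixels so the frozen extractor maps them (approximately) onto poison_feats.
poison_x = torch.rand(n_poison, 3, 32, 32, requires_grad=True)
opt_x = torch.optim.Adam([poison_x], lr=1e-2)
for _ in range(300):
    opt_x.zero_grad()
    loss = F.mse_loss(feature_extractor(poison_x), poison_feats.detach())
    loss.backward()
    opt_x.step()
    with torch.no_grad():
        poison_x.clamp_(0.0, 1.0)                 # keep poisons in valid pixel range

print("feature-matching loss after inversion:", loss.item())
```

By contrast, the input space attacks mentioned in the abstract would optimize the poisoned pixels directly against the downstream training objective, which is the constrained optimization the feature targeted route is designed to avoid.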
