Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks (2305.09671v2)
Abstract: Deep image classification models trained on vast amounts of web-scraped data are susceptible to data poisoning, a mechanism for backdooring models. A small number of poisoned samples seen during training can severely undermine a model's integrity at inference time. Existing work considers a defense effective if it either (i) restores a model's integrity through repair or (ii) detects an attack. We argue that this view overlooks a crucial trade-off: attackers can increase robustness at the expense of detectability (over-poisoning) or decrease detectability at the cost of robustness (under-poisoning). In practice, attacks must remain both undetectable and robust. Detectable but robust attacks draw human attention and rigorous model evaluation, or cause the model to be re-trained or discarded. Conversely, attacks that are undetectable but not robust can be repaired with minimal impact on model accuracy. Our research points to intrinsic flaws in current attack evaluation methods and raises the bar for all data poisoning attackers, who must delicately balance this trade-off to remain robust and undetectable. To demonstrate the existence of more potent defenders, we propose defenses designed to (i) detect or (ii) repair poisoned models using a limited amount of trusted image-label pairs. Our results show that an attacker who needs to be both robust and undetectable is substantially less threatening. Our defenses mitigate all tested attacks with a maximum accuracy decline of 2% using only 1% of clean data on CIFAR-10 and 2.5% on ImageNet. We demonstrate the scalability of our defenses by evaluating large vision-language models such as CLIP. Attackers who can manipulate the model's parameters pose an elevated risk, as they can achieve higher robustness at low detectability than data poisoning attackers.
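To make the threat model concrete, below is a minimal, hypothetical sketch of the kind of data poisoning the abstract refers to: a BadNets-style attack that stamps a small trigger patch onto a fraction of the training images and relabels them to an attacker-chosen target class. The function name, poison rate, patch size, and target label are illustrative assumptions, not values or methods from the paper.

```python
import numpy as np

def poison_dataset(images, labels, poison_rate=0.01, target_label=0,
                   patch_size=3, seed=0):
    """BadNets-style poisoning sketch: stamp a small white trigger patch onto a
    random subset of training images and relabel them to the target class.

    images: uint8 array of shape (N, H, W, C); labels: int array of shape (N,).
    All parameter values here are illustrative, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger in the bottom-right corner of each selected image.
    images[idx, -patch_size:, -patch_size:, :] = 255
    labels[idx] = target_label
    return images, labels, idx

# Toy usage on random CIFAR-10-shaped data: poison 1% of 1,000 images.
x = np.random.randint(0, 256, size=(1000, 32, 32, 3), dtype=np.uint8)
y = np.random.randint(0, 10, size=1000)
x_p, y_p, poisoned_idx = poison_dataset(x, y, poison_rate=0.01, target_label=0)
```

In this sketch, raising poison_rate tends to make the backdoor harder to remove but also easier for an auditor or a statistical defense to notice, which illustrates the detectability-versus-robustness tension the abstract describes.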