
Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise (2311.13091v2)

Published 22 Nov 2023 in cs.LG, cs.CR, and cs.CV

Abstract: The open sourcing of large amounts of image data promotes the development of deep learning techniques. Along with this comes the privacy risk that these open-source image datasets may be exploited by unauthorized third parties to train deep learning models for commercial or illegal purposes. To prevent the abuse of public data, a poisoning-based technique, the unlearnable example, has been proposed to significantly degrade the generalization performance of models by adding imperceptible noise to the data. To further enhance its robustness against adversarial training, existing works apply iterative adversarial training to both the defensive noise and the surrogate model. However, it remains unknown whether the robustness of unlearnable examples comes primarily from the enhancement of the surrogate model or of the defensive noise. Observing that simply removing the adversarial noise from the training process of the defensive noise improves the performance of robust unlearnable examples, we identify that the surrogate model's robustness alone contributes to the performance. Furthermore, we find a negative correlation between the robustness of the defensive noise and the protection performance, indicating an instability issue in the defensive noise. Motivated by this, to further boost the robust unlearnable example, we introduce stable error-minimizing noise (SEM), which trains the defensive noise against random perturbation instead of the time-consuming adversarial perturbation, improving the stability of the defensive noise. Through extensive experiments, we demonstrate that SEM achieves new state-of-the-art performance on CIFAR-10, CIFAR-100, and the ImageNet Subset in terms of both effectiveness and efficiency. The code is available at https://github.com/liuyixin-louis/Stable-Unlearnable-Example.
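The key change SEM makes, per the abstract, is to optimize the error-minimizing (defensive) noise against a cheap random perturbation rather than an adversarially crafted one. A minimal PyTorch-style sketch of one such noise update is shown below; the function name `sem_noise_step` and the hyperparameters `epsilon`, `rho`, and `lr` are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of the SEM idea: update the defensive noise `delta` by minimizing the
# surrogate model's training loss on images perturbed with *random* noise
# (within the adversarial budget `rho`), instead of an adversarial perturbation.
import torch
import torch.nn.functional as F


def sem_noise_step(surrogate, images, labels, delta,
                   epsilon=8 / 255, rho=4 / 255, lr=0.1):
    """One sign-gradient update of the defensive noise `delta` (hypothetical names)."""
    # Random perturbation standing in for the costly adversarial inner loop.
    xi = torch.empty_like(images).uniform_(-rho, rho)

    delta = delta.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate((images + delta + xi).clamp(0, 1)), labels)
    loss.backward()

    with torch.no_grad():
        # Gradient *descent* on the loss: error-minimizing noise makes the
        # protected data appear "already learned" to the model.
        delta = delta - lr * delta.grad.sign()
        # Project back onto the imperceptibility budget.
        delta = delta.clamp(-epsilon, epsilon)
    return delta.detach()
```

In this reading, the surrogate model would still be trained adversarially as in prior robust unlearnable examples; only the noise-optimization step swaps the adversarial perturbation for a random one, which is what the abstract credits for both the stability and the efficiency gains.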

