Ungeneralizable Examples (2404.14016v1)

Published 22 Apr 2024 in cs.LG and cs.CV

Abstract: The training of contemporary deep learning models relies heavily on publicly available data, posing a risk of unauthorized access to online data and raising concerns about data privacy. Current approaches to creating unlearnable data incorporate small, specially designed noise, but they strictly limit data usability and overlook its potential use in authorized scenarios. In this paper, we extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs). UGEs are learnable for authorized users while remaining unlearnable for potential hackers. The protector defines the authorized network and optimizes UGEs so that the gradients they induce match those of the original data, ensuring learnability. To prevent unauthorized learning, UGEs are trained by maximizing a designated distance loss in a common feature space. To further safeguard the authorized side from potential attacks, we also introduce an undistillation optimization. Experimental results on multiple datasets and various networks demonstrate that the proposed UGE framework preserves data usability while reducing training performance on hacker networks, even under different types of attacks.
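
The abstract describes two competing objectives that shape a UGE: a gradient-matching term on the protector's authorized network (so the data stays learnable for that network) and a feature-space distance term that is maximized against an unauthorized network (so a hacker cannot generalize from the data). The sketch below only illustrates that structure; every name, the specific loss forms (cosine gradient matching, MSE feature distance), the signed-gradient update, and the hyperparameters (`lam`, `eps`, `step_size`) are assumptions for illustration, not the paper's actual formulation or released code.

```python
import torch
import torch.nn.functional as F

def grad_match_loss(model, x_orig, x_uge, y):
    # Cosine distance between the parameter gradients induced by the original
    # batch and by its UGE counterpart; minimizing it keeps the UGEs learnable
    # for the authorized network.
    params = [p for p in model.parameters() if p.requires_grad]
    g_orig = torch.autograd.grad(F.cross_entropy(model(x_orig), y), params)
    g_uge = torch.autograd.grad(F.cross_entropy(model(x_uge), y), params,
                                create_graph=True)
    flat = lambda gs: torch.cat([g.reshape(-1) for g in gs])
    return 1.0 - F.cosine_similarity(flat(g_orig).detach(), flat(g_uge), dim=0)

def feature_distance_loss(extractor, x_orig, x_uge):
    # Distance in a shared feature space; it is *maximized* so a surrogate
    # hacker network extracts little usable signal from the UGEs.
    return F.mse_loss(extractor(x_uge), extractor(x_orig).detach())

def uge_step(delta, x, y, authorized_net, hacker_extractor,
             step_size=0.01, lam=1.0, eps=8 / 255):
    # One update of the perturbation `delta` that turns x into a UGE
    # (hypothetical hyperparameters).
    x_uge = (x + delta).clamp(0.0, 1.0)
    loss = (grad_match_loss(authorized_net, x, x_uge, y)
            - lam * feature_distance_loss(hacker_extractor, x, x_uge))
    grad, = torch.autograd.grad(loss, delta)
    with torch.no_grad():
        delta -= step_size * grad.sign()   # signed-gradient step on the noise
        delta.clamp_(-eps, eps)            # keep the perturbation small
    return delta, loss.item()

# Usage (models and batches are placeholders):
# delta = torch.zeros_like(x_batch, requires_grad=True)
# delta, loss = uge_step(delta, x_batch, y_batch, authorized_net, hacker_backbone)
```

This step would be applied repeatedly over the protected dataset. The paper's additional undistillation objective, which further shields the authorized side from distillation-style attacks, is omitted from this sketch.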
