
Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection (2312.04095v1)

Published 7 Dec 2023 in cs.LG and cs.CV

Abstract: Recent data-privacy laws have sparked interest in machine unlearning, which involves removing the effect of specific training samples from a learnt model as if they were never present in the original training dataset. The challenge of machine unlearning is to discard information about the "forget" data in the learnt model without altering the knowledge about the remaining dataset, and to do so more efficiently than the naive retraining approach. To achieve this, we adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU), in which the model takes steps orthogonal to the gradient subspaces deemed important for the retained dataset, so that its knowledge is preserved. By utilizing Stochastic Gradient Descent (SGD) to update the model weights, our method can efficiently scale to any model and dataset size. We provide empirical evidence demonstrating that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible. Our code is available at https://github.com/hnanhtuan/projected_gradient_unlearning.
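The core mechanism can be illustrated with a small sketch. The sketch below is an illustration, not the paper's implementation: it assumes the "important" subspace for the retained data is extracted via SVD of sampled retain-set gradients (keeping components up to a hypothetical 95% energy threshold), and then projects each unlearning update onto the orthogonal complement of that subspace before applying it.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # flattened parameter dimension of one hypothetical layer

# Step 1 (assumption): sample gradients on the retained data and take the
# top singular directions as the subspace "important" for retained knowledge.
retain_grads = rng.normal(size=(dim, 20))           # columns = gradient samples
U, S, _ = np.linalg.svd(retain_grads, full_matrices=False)
energy = np.cumsum(S**2) / np.sum(S**2)
k = int(np.searchsorted(energy, 0.95)) + 1          # 95% threshold is illustrative
M = U[:, :k]                                        # orthonormal basis of the subspace

def project_out(g, M):
    """Remove the component of g lying in span(M): g - M (M^T g)."""
    return g - M @ (M.T @ g)

# Step 2: an SGD-style unlearning step on the forget data, restricted to the
# orthogonal complement of the retained-data subspace.
forget_grad = rng.normal(size=dim)
step = project_out(forget_grad, M)

# The projected step is (numerically) orthogonal to every retained direction,
# so parameter movement along those directions is zero.
assert np.allclose(M.T @ step, 0.0, atol=1e-10)
```

Because the projection is a single matrix-vector product per step, the update remains as cheap as ordinary SGD once the basis `M` is computed, which is what lets the approach scale with model and dataset size.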
