
Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening (2308.07707v2)

Published 15 Aug 2023 in cs.LG

Abstract: Machine unlearning, the ability of a machine learning model to forget, is becoming increasingly important to comply with data privacy regulations, as well as to remove harmful, manipulated, or outdated information. The key challenge lies in forgetting specific information while protecting model performance on the remaining data. While current state-of-the-art methods perform well, they typically require some level of retraining over the retained data in order to protect or restore model performance. This adds computational overhead and mandates that the training data remain available and accessible, which may not be feasible. In contrast, other methods employ a retrain-free paradigm; however, these approaches are computationally prohibitive and do not perform on par with their retrain-based counterparts. We present Selective Synaptic Dampening (SSD), a novel two-step, post hoc, retrain-free approach to machine unlearning which is fast, performant, and does not require long-term storage of the training data. First, SSD uses the Fisher information matrix of the training and forgetting data to select parameters that are disproportionately important to the forget set. Second, SSD induces forgetting by dampening these parameters in proportion to their relative importance to the forget set with respect to the wider training data. We evaluate our method against several existing unlearning methods in a range of experiments using ResNet18 and Vision Transformer. Results show that the performance of SSD is competitive with retrain-based post hoc methods, demonstrating the viability of retrain-free post hoc unlearning approaches.
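To make the two-step procedure concrete, here is a minimal PyTorch sketch of the idea described in the abstract: estimate a diagonal Fisher importance for each parameter over the full training data and over the forget set, select parameters whose forget-set importance is disproportionately high, and scale those parameters down in proportion to the importance ratio. The function names (diag_fisher, ssd_dampen), the hyperparameters alpha and lam, and the use of squared mini-batch gradients as a cheap stand-in for the per-sample diagonal Fisher are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def diag_fisher(model, loader, device="cpu"):
    """Estimate a diagonal Fisher-style importance per parameter.

    Uses the mean of squared mini-batch gradients of the loss as a
    cheap stand-in for the per-sample diagonal Fisher information.
    """
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    n_batches = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}

@torch.no_grad()
def ssd_dampen(model, fisher_train, fisher_forget, alpha=10.0, lam=1.0):
    """Two-step SSD sketch (alpha and lam are assumed hyperparameters).

    Step 1: select parameters whose forget-set importance exceeds
    alpha times their importance to the full training data.
    Step 2: dampen the selected parameters by a factor proportional
    to the train/forget importance ratio, capped at 1 so parameters
    are only ever scaled down.
    """
    for n, p in model.named_parameters():
        f_d, f_f = fisher_train[n], fisher_forget[n]
        selected = f_f > alpha * f_d  # disproportionately important to forget set
        beta = torch.clamp(lam * f_d / (f_f + 1e-12), max=1.0)
        p[selected] = p[selected] * beta[selected]
```

In use, one would compute fisher_train once over the training data, compute fisher_forget over the forget set at unlearning time, and then call ssd_dampen; because only a forward/backward pass per batch and a single parameter rescaling are required, no retraining or long-term storage of the training data is needed, matching the retrain-free claim in the abstract.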

Authors (3)
  1. Jack Foster (17 papers)
  2. Stefan Schoepf (16 papers)
  3. Alexandra Brintrup (50 papers)
Citations (58)