Loss-Free Machine Unlearning (2402.19308v1)

Published 29 Feb 2024 in cs.LG and cs.CV

Abstract: We present a machine unlearning approach that is both retraining- and label-free. Most existing machine unlearning approaches require a model to be fine-tuned to remove information while preserving performance. This is computationally expensive and necessitates the storage of the whole dataset for the lifetime of the model. Retraining-free approaches often utilise Fisher information, which is derived from the loss and requires labelled data which may not be available. Thus, we present an extension to the Selective Synaptic Dampening algorithm, replacing the diagonal of the Fisher information matrix with the gradient of the l2 norm of the model output to approximate sensitivity. We evaluate our method in a range of experiments using ResNet18 and Vision Transformer. Results show our label-free method is competitive with existing state-of-the-art approaches.
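The abstract's core idea, sketched below under simplifying assumptions: per-parameter sensitivity is estimated from squared gradients of the l2 norm of the model output (no labels or loss needed), and parameters far more sensitive on the forget set than on the retain set are dampened. This is a minimal illustration for a linear model f(x) = Wx, where the gradient of ||f(x)||^2 has the closed form 2(Wx)x^T; the function names, thresholds, and clipping details are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def output_norm_sensitivity(W, X):
    """Label-free importance estimate: average squared gradient of
    ||f(x)||_2^2 w.r.t. W over the samples in X (a stand-in for the
    Fisher diagonal used by Selective Synaptic Dampening)."""
    omega = np.zeros_like(W)
    for x in X:
        g = 2.0 * np.outer(W @ x, x)   # d/dW of ||W x||_2^2
        omega += g ** 2
    return omega / len(X)

def dampen(W, omega_forget, omega_retain, alpha=10.0, lam=1.0):
    """Dampen weights whose forget-set sensitivity exceeds alpha times
    their retain-set sensitivity; the scaling factor is clipped at 1
    so no weight is ever amplified."""
    beta = np.minimum(lam * omega_retain / (omega_forget + 1e-12), 1.0)
    mask = omega_forget > alpha * omega_retain
    return np.where(mask, W * beta, W)
```

In practice the same recipe would apply to a deep network by accumulating autograd gradients of the output norm per parameter; the linear case above just makes the arithmetic explicit.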

References (12)
  1. Memory aware synapses: Learning what (not) to forget. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 139–154, 2018.
  2. Burak. Pinterest face recognition dataset. www.kaggle.com/datasets/hereisburak/pins-face-recognition, 2020.
  3. Towards making systems forget with machine unlearning. In 2015 IEEE Symposium on Security and Privacy, pp. 463–480. IEEE, 2015.
  4. Can bad teaching induce forgetting? Unlearning in deep networks using an incompetent teacher. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 7210–7217, 2023a.
  5. Zero-shot machine unlearning. IEEE Transactions on Information Forensics and Security, 2023b.
  6. Fast machine unlearning without retraining through selective synaptic dampening. arXiv preprint arXiv:2308.07707, 2023.
  7. Eternal sunshine of the spotless net: Selective forgetting in deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
  8. Amnesiac machine learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 11516–11524, 2021.
  9. Convolutional deep belief networks on CIFAR-10. Unpublished manuscript, 40(7):1–9, 2010.
  10. Towards unbounded machine unlearning. arXiv preprint arXiv:2302.09880, 2023.
  11. Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 3–18. IEEE, 2017.
  12. Fast yet effective machine unlearning. IEEE Transactions on Neural Networks and Learning Systems, 2023.
Authors (3)
  1. Jack Foster
  2. Stefan Schoepf
  3. Alexandra Brintrup
Citations (2)
