Loss-Free Machine Unlearning (2402.19308v1)
Abstract: We present a machine unlearning approach that is both retraining- and label-free. Most existing machine unlearning approaches require a model to be fine-tuned to remove information while preserving performance. This is computationally expensive and necessitates the storage of the whole dataset for the lifetime of the model. Retraining-free approaches often utilise Fisher information, which is derived from the loss and therefore requires labelled data, which may not be available. Thus, we present an extension to the Selective Synaptic Dampening algorithm, substituting the gradient of the l2 norm of the model output for the diagonal of the Fisher information matrix as the approximation of parameter sensitivity. We evaluate our method in a range of experiments using ResNet18 and Vision Transformer. Results show our label-free method is competitive with existing state-of-the-art approaches.
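The abstract describes replacing the Fisher-based sensitivity in Selective Synaptic Dampening with a label-free one derived from the l2 norm of the model output. A minimal NumPy sketch of one plausible reading follows: importance is the mean squared gradient of the squared output norm, and parameters far more important to the forget set than to the retain set are dampened. The linear model, the `alpha`/`lam` values, and the exact dampening rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny linear model f(x) = W @ x standing in for a trained network (assumption).
W = rng.normal(size=(4, 8))

def importance(W, X):
    """Label-free importance: mean squared gradient of ||f(x)||^2 w.r.t. W.
    For f(x) = W @ x, the gradient is d||Wx||^2/dW = 2 (Wx) x^T."""
    omega = np.zeros_like(W)
    for x in X:
        grad = 2.0 * np.outer(W @ x, x)
        omega += grad ** 2
    return omega / len(X)

def dampen(W, omega_forget, omega_retain, alpha=10.0, lam=1.0):
    """SSD-style rule (sketch): where forget-set importance exceeds
    alpha times retain-set importance, shrink the parameter by a factor
    proportional to the importance ratio, capped at 1 (no amplification)."""
    W = W.copy()
    mask = omega_forget > alpha * omega_retain
    beta = np.minimum(lam * omega_retain[mask] / omega_forget[mask], 1.0)
    W[mask] *= beta
    return W

X_retain = rng.normal(size=(64, 8))  # stand-in for the retained data
X_forget = rng.normal(size=(16, 8))  # stand-in for the data to forget
W_unlearned = dampen(W, importance(W, X_forget), importance(W, X_retain))
```

Because the rule only ever multiplies weights by factors in [0, 1], the edit is a one-shot, retraining-free modification: no labels, loss, or optimiser steps are involved.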
- Jack Foster
- Stefan Schoepf
- Alexandra Brintrup