Contrastive Unlearning: A Contrastive Approach to Machine Unlearning (2401.10458v1)
Abstract: Machine unlearning aims to eliminate the influence of a subset of training samples (i.e., unlearning samples) from a trained model. Removing unlearning samples effectively and efficiently, without degrading overall model performance, remains challenging. In this paper, we propose a contrastive unlearning framework that leverages representation learning for more effective unlearning. It removes the influence of unlearning samples by contrasting their embeddings against those of the remaining samples, so that unlearning samples are pushed away from their original classes and pulled toward other classes. By directly optimizing the representation space, the framework removes the influence of unlearning samples while preserving the representations learned from the remaining samples. Experiments on a variety of datasets and models, covering both class unlearning and sample unlearning, show that contrastive unlearning achieves the best unlearning effectiveness and efficiency with the lowest performance loss compared with state-of-the-art algorithms.
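The push/pull mechanism described above can be illustrated with an inverted supervised-contrastive loss: for an unlearning sample, embeddings of *other-class* remaining samples act as positives (pulled toward) and *same-class* embeddings act as negatives (pushed away). The sketch below is a hedged illustration under these assumptions; the function name, temperature value, and exact loss form are hypothetical and not taken from the paper.

```python
import numpy as np

def unlearn_contrastive_loss(z_u, z_batch, y_u, y_batch, tau=0.5):
    """Illustrative contrastive-unlearning loss for one unlearning sample.

    z_u:     embedding of the unlearning sample, shape (d,)
    z_batch: embeddings of remaining samples, shape (n, d)
    y_u:     class label of the unlearning sample
    y_batch: class labels of the remaining samples, shape (n,)

    Other-class samples are treated as positives (pull toward),
    same-class samples as negatives (push away) -- the inverse of
    standard supervised contrastive learning.
    """
    # L2-normalize so dot products are cosine similarities
    z_u = z_u / np.linalg.norm(z_u)
    z_batch = z_batch / np.linalg.norm(z_batch, axis=1, keepdims=True)

    sims = (z_batch @ z_u) / tau            # temperature-scaled similarities
    shifted = sims - sims.max()             # numerically stable softmax
    log_probs = shifted - np.log(np.exp(shifted).sum())

    positives = y_batch != y_u              # other-class samples are positives
    # Minimizing this pulls z_u toward other classes, away from its own
    return -log_probs[positives].mean()
```

Minimizing this loss over the unlearning samples (while, e.g., keeping a standard supervised objective on the remaining samples) realizes the push/pull behavior in the abstract: the loss is low when the unlearning embedding sits near other-class embeddings and high when it stays near its original class.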