Unlearning via Sparse Representations (2311.15268v2)

Published 26 Nov 2023 in cs.LG and cs.AI

Abstract: Machine unlearning, which involves erasing knowledge about a forget set from a trained model, can be costly or infeasible with existing techniques. We propose a nearly compute-free, zero-shot unlearning technique based on a discrete representational bottleneck. We show that the proposed technique efficiently unlearns the forget set while incurring negligible damage to the model's performance on the rest of the data set. We evaluate the proposed technique on the problem of class unlearning using three datasets: CIFAR-10, CIFAR-100, and LACUNA-100, and compare it to SCRUB, a state-of-the-art approach that uses knowledge distillation for unlearning. Across all three datasets, the proposed technique performs as well as, if not better than, SCRUB while incurring almost no computational cost.
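The core idea described in the abstract can be illustrated with a toy discrete bottleneck: encoder features are snapped to the nearest entry ("key") in a codebook, and unlearning amounts to deactivating the keys that the forget set selects, with no gradient updates. The sketch below is a minimal illustration of that mechanism, not the paper's implementation; all shapes, names (`quantize`, `unlearn`, `active`), and the random data are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete key-value bottleneck: each of num_keys codebook slots has a
# key (for nearest-neighbour matching) and a value (passed downstream).
num_keys, feat_dim, val_dim = 64, 8, 4
keys = rng.normal(size=(num_keys, feat_dim))
values = rng.normal(size=(num_keys, val_dim))
active = np.ones(num_keys, dtype=bool)  # mask of usable codebook slots

def quantize(feats):
    """Return the index of the nearest *active* key for each feature row."""
    d = ((feats[:, None, :] - keys[None, :, :]) ** 2).sum(-1)
    d[:, ~active] = np.inf  # deactivated slots can never be selected
    return d.argmin(axis=1)

def forward(feats):
    """Bottleneck forward pass: route each input through its nearest key."""
    return values[quantize(feats)]

def unlearn(forget_feats):
    """Zero-shot unlearning: deactivate every key the forget set selects."""
    active[np.unique(quantize(forget_feats))] = False

# Usage: keys hit by the (here synthetic) forget set are masked out;
# retained inputs that never used those keys are routed exactly as before.
forget = rng.normal(size=(16, feat_dim))
hit = np.unique(quantize(forget))
unlearn(forget)
```

Because the codebook is sparse (each input touches only a few slots), masking the forget set's slots leaves most of the model untouched, which is why the damage to performance on the retained data is negligible.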

References (37)
  1. VGGFace2: A dataset for recognising faces across pose and age. In International Conference on Automatic Face and Gesture Recognition, 2018.
  2. Towards making systems forget with machine unlearning. In 2015 IEEE Symposium on Security and Privacy, pp. 463–480, 2015. doi: 10.1109/SP.2015.35.
  3. Incremental and decremental support vector machine learning. Advances in neural information processing systems, 13, 2000.
  4. Boundary unlearning: Rapid forgetting of deep networks via shifting the decision boundary. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp.  7766–7775, 2023. URL https://api.semanticscholar.org/CorpusID:257636742.
  5. Zero-shot machine unlearning. IEEE Transactions on Information Forensics and Security, 2023.
  6. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
  7. Decremental learning algorithms for nonlinear langrangian and least squares support vector machines. 2007. URL https://api.semanticscholar.org/CorpusID:13986244.
  8. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp.  12873–12883, 2021.
  9. The lottery ticket hypothesis: Finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635, 2018.
  10. Making ai forget you: Data deletion in machine learning. Advances in neural information processing systems, 32, 2019.
  11. Eternal sunshine of the spotless net: Selective forgetting in deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.  9304–9312, 2020a.
  12. Forgetting outside the box: Scrubbing deep networks of information accessible from input-output observations. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIX 16, pp.  383–398. Springer, 2020b.
  13. Coordination among neural modules through a shared global workspace. arXiv preprint arXiv:2103.01197, 2021.
  14. Certified data removal from machine learning models. arXiv preprint arXiv:1911.03030, 2019.
  15. Approximate data deletion from machine learning models. In International Conference on Artificial Intelligence and Statistics, pp.  2008–2016. PMLR, 2021.
  16. Neural tangent kernel: Convergence and generalization in neural networks. Advances in neural information processing systems, 31, 2018.
  17. Perceiver: General perception with iterative attention. In International conference on machine learning, pp. 4651–4664. PMLR, 2021.
  18. Model sparsification can simplify machine unlearning. arXiv preprint arXiv:2304.04934, 2023.
  19. Learning multiple layers of features from tiny images. 2009.
  20. Towards unbounded machine unlearning. arXiv preprint arXiv:2302.09880, 2023.
  21. Let machines unlearn - machine unlearning and the right to be forgotten. In Americas Conference on Information Systems, 2017. URL https://api.semanticscholar.org/CorpusID:10605911.
  22. Discrete-valued neural communication. Advances in Neural Information Processing Systems, 34:2109–2121, 2021.
  23. Stateful active facilitator: Coordination and environmental heterogeneity in cooperative multi-agent reinforcement learning. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=B4maZQLLW0_.
  24. Alessandro Mantelero. The eu proposal for a general data protection regulation and the roots of the ‘right to be forgotten’. Computer Law & Security Review, 29(3):229–235, 2013. ISSN 0267-3649. doi: https://doi.org/10.1016/j.clsr.2013.03.010. URL https://www.sciencedirect.com/science/article/pii/S0267364913000654.
  25. Deep unlearning via randomized conditionally independent hessians. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.  10422–10431, 2022.
  26. A survey of machine unlearning. arXiv preprint arXiv:2209.02299, 2022.
  27. Neural discrete representation learning. arXiv preprint arXiv:1711.00937, 2017.
  28. Learning transferable visual models from natural language supervision. In International conference on machine learning, pp. 8748–8763. PMLR, 2021.
  29. Generating diverse high-fidelity images with vq-vae-2. Advances in neural information processing systems, 32, 2019.
  30. Membership inference attacks against machine learning models. In 2017 IEEE symposium on security and privacy (SP), pp. 3–18. IEEE, 2017.
  31. Fast yet effective machine unlearning. IEEE Transactions on Neural Networks and Learning Systems, 2023.
  32. Discrete key-value bottleneck. In International Conference on Machine Learning, pp. 34431–34455. PMLR, 2023.
  33. Incremental and decremental training for linear classification. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14, pp.  343–352, New York, NY, USA, 2014. Association for Computing Machinery. ISBN 9781450329569. doi: 10.1145/2623330.2623661. URL https://doi.org/10.1145/2623330.2623661.
  34. Humans forget, machines remember: Artificial intelligence and the right to be forgotten. Computer Law & Security Review, 34(2):304–313, 2018. ISSN 0267-3649. doi: https://doi.org/10.1016/j.clsr.2017.08.007. URL https://www.sciencedirect.com/science/article/pii/S0267364917302091.
  35. Machine unlearning of features and labels. arXiv preprint arXiv:2108.11577, 2021.
  36. Machine unlearning: A survey. ACM Computing Surveys, 56(1):1–36, 2023.
  37. A review on machine unlearning. SN Computer Science, 4(4):337, Apr 2023. ISSN 2661-8907. doi: 10.1007/s42979-023-01767-4. URL https://doi.org/10.1007/s42979-023-01767-4.