
Towards Machine Unlearning Benchmarks: Forgetting the Personal Identities in Facial Recognition Systems (2311.02240v2)

Published 3 Nov 2023 in cs.CV

Abstract: Machine unlearning is a crucial tool for enabling a classification model to forget specific data used during training. Recently, various studies have presented machine unlearning algorithms and evaluated them on several datasets. However, most current machine unlearning algorithms have been evaluated solely on traditional computer vision datasets such as CIFAR-10, MNIST, and SVHN. Furthermore, previous studies generally evaluate unlearning methods in a class-unlearning setup: a classification model is first trained, and an unlearning algorithm is then evaluated on how well it forgets selected image classes (categories). Unfortunately, these class-unlearning settings might not generalize to real-world scenarios. In this work, we propose a machine unlearning setting that aims to unlearn specific instances containing personal information (identity) while maintaining the original task of a given model. Specifically, we propose two machine unlearning benchmark datasets, MUFAC and MUCAC, that are well suited to evaluating the performance and robustness of machine unlearning algorithms. In our benchmarks, the original model performs facial feature recognition tasks: face age estimation (multi-class classification) and facial attribute classification (binary classification), where no class depends on any single target subject (personal identity), which makes for a realistic setting. We also report the performance of state-of-the-art machine unlearning methods on the proposed benchmark datasets. All datasets, source code, and trained models are publicly available at https://github.com/ndb796/MachineUnlearning.
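
The evaluation protocol the abstract describes can be summarized in a short sketch. Below is a minimal, hypothetical PyTorch illustration of instance-level unlearning evaluation: the training set is split into retain and forget subsets by personal identity rather than by class label, and an unlearned model is scored on both task utility and forgetting. The helper names (`split_by_identity`, `evaluate_unlearning`) and the `"identity"` field are assumptions for illustration, not the repository's actual API.

```python
import torch

def split_by_identity(samples, forget_ids):
    """Split training samples into retain/forget subsets by personal
    identity rather than by class label. Each sample is assumed (for this
    sketch) to be a dict with an "identity" field alongside its image and
    task label."""
    forget = [s for s in samples if s["identity"] in forget_ids]
    retain = [s for s in samples if s["identity"] not in forget_ids]
    return retain, forget

@torch.no_grad()
def evaluate_unlearning(model, retain_loader, forget_loader, test_loader,
                        device="cpu"):
    """Score an unlearned model on (i) utility: accuracy on the original
    task's held-out test set, which should stay high, and (ii) forgetting:
    forget-set accuracy, which should drop toward unseen-data (test-set)
    levels rather than to zero, since the task classes themselves are kept."""
    model.eval()

    def accuracy(loader):
        correct, total = 0, 0
        for x, y in loader:
            pred = model(x.to(device)).argmax(dim=1)
            correct += (pred == y.to(device)).sum().item()
            total += y.numel()
        return correct / max(total, 1)

    return {
        "retain_acc": accuracy(retain_loader),
        "forget_acc": accuracy(forget_loader),
        "test_acc": accuracy(test_loader),
    }
```

Note the design point this setup captures: because age groups and facial attributes are shared across subjects, forgetting an identity should make the model treat that person's images like unseen data, not erase an entire class.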
