
Towards Reliable Empirical Machine Unlearning Evaluation: A Cryptographic Game Perspective

Published 17 Apr 2024 in cs.LG and cs.AI (arXiv:2404.11577v3)

Abstract: Machine unlearning updates machine learning models to remove information from specific training samples, complying with data protection regulations that allow individuals to request the removal of their personal data. Despite the recent development of numerous unlearning algorithms, reliable evaluation of these algorithms remains an open research question. In this work, we focus on evaluation based on membership inference attacks (MIAs), one of the most common approaches for evaluating unlearning algorithms, and address various pitfalls of existing evaluation metrics that lack theoretical understanding and reliability. Specifically, we model the proposed evaluation process as a \emph{cryptographic game} between unlearning algorithms and MIA adversaries; the naturally induced evaluation metric measures the data removal efficacy of unlearning algorithms and enjoys provable guarantees that existing evaluation metrics fail to satisfy. Furthermore, we propose a practical and efficient approximation of the induced evaluation metric and demonstrate its effectiveness through both theoretical analysis and empirical experiments. Overall, this work presents a novel and reliable approach to empirically evaluating unlearning algorithms, paving the way for the development of more effective unlearning techniques.
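The game-based evaluation the abstract describes can be illustrated with a toy simulation. This is not the paper's metric, only a minimal sketch of the general MIA-game template: a challenger flips a secret bit deciding whether a target sample was trained on and then unlearned, or never trained on at all; an adversary inspects the resulting model and guesses the bit, and the adversary's advantage over random guessing quantifies removal efficacy. The `train`, `unlearn`, and `adversary` functions here are hypothetical stand-ins (a mean-of-data "model" with exact unlearning by retraining), chosen so the sketch is self-contained and runnable.

```python
import random
import statistics


def train(samples):
    # Toy "model": just the mean of its training samples.
    return statistics.mean(samples)


def unlearn(model, samples, target):
    # Exact unlearning for the toy model: retrain without the target.
    # (The pre-unlearning model is unused here, but an approximate
    # unlearning algorithm would update it in place.)
    return train([s for s in samples if s != target])


def adversary(model, target):
    # A simple membership guess: "member" if the model output is
    # close to the target sample.
    return abs(model - target) < 1.0


def play_game(rounds=2000, seed=0):
    """Estimate the adversary's advantage over random guessing."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(rounds):
        data = [rng.gauss(0, 1) for _ in range(10)]
        target = rng.gauss(0, 1)
        b = rng.randint(0, 1)  # challenger's secret bit
        if b == 1:
            # Target was trained on, then unlearned.
            model = unlearn(train(data + [target]), data + [target], target)
        else:
            # Target was never trained on.
            model = train(data)
        guess = 1 if adversary(model, target) else 0
        wins += int(guess == b)
    # Advantage near 0 means the adversary cannot distinguish the two
    # worlds, i.e. removal is (empirically) effective.
    return abs(wins / rounds - 0.5)
```

Because the toy unlearning is exact retraining, the two worlds are identically distributed and `play_game()` returns an advantage near zero; an imperfect unlearning algorithm that leaks information about the target would yield a measurably larger advantage.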
