Machine unlearning through fine-grained model parameters perturbation (2401.04385v4)

Published 9 Jan 2024 in cs.LG and cs.AI

Abstract: Machine unlearning techniques, which retract data records and reduce the influence of that data on trained models, support user privacy protection but incur significant computational costs. Weight perturbation-based unlearning is a general approach, but it typically modifies the parameters globally. We propose fine-grained Top-K and Random-k parameter-perturbation strategies for inexact machine unlearning that address privacy needs while keeping the computational costs tractable. To demonstrate the efficacy of our strategies, we also tackle the challenge of evaluating unlearning effectiveness by considering the model's generalization performance across both the unlearning data and the remaining data, and we propose novel metrics for this purpose: the forgetting rate and the memory retention rate. However, for inexact machine unlearning, current metrics are inadequate for quantifying the degree of forgetting achieved by an unlearning strategy. To address this, we introduce SPD-GAN, which subtly perturbs the distribution of the data targeted for unlearning; we then evaluate the degree of unlearning by measuring the difference in model performance on the perturbed unlearning data before and after the unlearning process. With these techniques and metrics, we achieve computationally efficient privacy protection in machine learning applications without significant sacrifice of model performance, and we provide a novel method for evaluating the degree of unlearning.
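The Top-K strategy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the "most influential" parameters are selected by the magnitude of the loss gradient on the forget set, and that the perturbation is additive Gaussian noise; the function name, the `k_frac` and `sigma` parameters, and the NumPy-array model representation are all hypothetical simplifications.

```python
import numpy as np

def top_k_perturb_unlearn(weights, forget_grads, k_frac=0.01, sigma=0.1, seed=0):
    """Sketch of Top-K parameter-perturbation unlearning.

    Perturbs only the fraction `k_frac` of parameters with the largest
    gradient magnitude on the forget set, leaving all others untouched,
    instead of modifying the parameters globally.
    """
    rng = np.random.default_rng(seed)
    flat_w = weights.ravel().copy()
    flat_g = np.abs(forget_grads.ravel())
    k = max(1, int(k_frac * flat_w.size))
    # Indices of the k parameters most influenced by the forget data.
    idx = np.argpartition(flat_g, -k)[-k:]
    # Add Gaussian noise only at those positions (one possible perturbation).
    flat_w[idx] += rng.normal(0.0, sigma, size=k)
    return flat_w.reshape(weights.shape)
```

In practice such a step would typically be followed by brief fine-tuning on the remaining data to restore utility; a Random-k variant would simply draw `idx` uniformly at random instead of by gradient magnitude.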

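The SPD-GAN evaluation idea, measuring the performance gap on perturbed forget data before and after unlearning, can be sketched as below. This assumes the perturbed forget set has already been produced (the GAN-based perturbation itself is not shown), and the `unlearning_degree` function and its arguments are illustrative names, not the paper's API.

```python
import numpy as np

def accuracy(predict, X, y):
    """Fraction of correct predictions for a model `predict(X) -> labels`."""
    return float(np.mean(predict(X) == y))

def unlearning_degree(model_before, model_after, perturbed_forget_X, forget_y):
    """Sketch of the unlearning-degree measure: the drop in performance on
    subtly perturbed forget data after unlearning. A larger drop suggests
    the model has forgotten more about the targeted data."""
    return (accuracy(model_before, perturbed_forget_X, forget_y)
            - accuracy(model_after, perturbed_forget_X, forget_y))
```

The key design choice is evaluating on *perturbed* rather than raw forget data: a model that has truly forgotten should lose the ability to generalize to slight variations of the unlearned records, not merely misclassify the exact records themselves.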
