
Federated Unlearning via Active Forgetting (2307.03363v1)

Published 7 Jul 2023 in cs.LG

Abstract: Increasing concerns about the privacy of machine learning models have catalyzed the exploration of machine unlearning, i.e., a process that removes the influence of training data from machine learning models. This concern also arises in federated learning, prompting researchers to address the federated unlearning problem. However, federated unlearning remains challenging. Existing unlearning methods can be broadly categorized into two approaches: exact unlearning and approximate unlearning. First, implementing exact unlearning, which typically relies on the partition-aggregation framework, in a distributed manner does not theoretically improve time efficiency. Second, existing federated (approximate) unlearning methods suffer from imprecise data influence estimation, significant computational burden, or both. To this end, we propose a novel federated unlearning framework based on incremental learning, which is independent of specific models and federated settings. Our framework differs from existing federated unlearning methods that rely on approximate retraining or data influence estimation. Instead, we leverage new memories to overwrite old ones, imitating the process of active forgetting in neurology. Specifically, the model intended to unlearn serves as a student model that continuously learns from randomly initialized teacher models. To mitigate catastrophic forgetting of non-target data, we utilize elastic weight consolidation to elastically constrain weight changes. Extensive experiments on three benchmark datasets demonstrate the efficiency and effectiveness of our proposed method. Results of backdoor attack experiments demonstrate that our proposed method achieves satisfactory completeness.
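As a rough illustration of the mechanism the abstract describes, below is a minimal PyTorch sketch of one "active forgetting" update: the to-be-unlearned model (the student) distills the outputs of a randomly initialized teacher on the target data, while an elastic weight consolidation (EWC) penalty constrains drift on weights important to the retained data. All names here (`estimate_fisher`, `active_forgetting_step`, `lam`, etc.) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def estimate_fisher(model, retained_loader):
    """Diagonal Fisher information estimated on retained (non-target) data;
    EWC uses it to decide which weights must stay close to their old values."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in retained_loader:
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(retained_loader), 1) for n, f in fisher.items()}

def ewc_penalty(model, anchor, fisher):
    """Quadratic penalty that elastically pulls parameters back toward the
    pre-unlearning snapshot, weighted by their estimated importance."""
    return sum((fisher[n] * (p - anchor[n]) ** 2).sum()
               for n, p in model.named_parameters())

def active_forgetting_step(student, teacher, x_target, optimizer,
                           anchor, fisher, lam=1.0):
    """One unlearning step: imitate the random teacher on the target data
    (new memories overwrite old ones) while EWC protects non-target knowledge."""
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x_target), dim=-1)
    log_student = F.log_softmax(student(x_target), dim=-1)
    loss = (F.kl_div(log_student, teacher_probs, reduction="batchmean")
            + lam * ewc_penalty(student, anchor, fisher))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Setup sketch (hypothetical names):
#   anchor  = {n: p.detach().clone() for n, p in student.named_parameters()}
#   teacher = a freshly, randomly initialized copy of the same architecture
#   fisher  = estimate_fisher(student, retained_loader)
```

The design choice worth noting is that the teacher is deliberately untrained: distilling its near-random outputs on the target data erases that data's influence, while the EWC term keeps the update from spilling over onto weights that encode the remaining clients' knowledge.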

Authors (4)
  1. Yuyuan Li (24 papers)
  2. Chaochao Chen (87 papers)
  3. Xiaolin Zheng (52 papers)
  4. Jiaming Zhang (117 papers)
Citations (10)
