Class-wise Federated Unlearning: Harnessing Active Forgetting with Teacher-Student Memory Generation
Abstract: Privacy concerns surrounding machine learning models have driven research into machine unlearning, which aims to erase the memory of specific target training data from an already trained model. The same concern arises in federated learning, giving rise to the federated unlearning problem. Federated unlearning remains challenging for two reasons. On the one hand, current research focuses primarily on unlearning all data from a client, overlooking finer-grained unlearning targets such as class-wise and sample-wise removal. On the other hand, existing methods suffer from imprecise estimation of data influence and impose significant computational or storage burdens. To address these issues, we propose a neuro-inspired federated unlearning framework based on active forgetting that is independent of model architecture and suitable for fine-grained unlearning targets. Our framework distinguishes itself from existing methods by overwriting old memories with new ones, which are generated through teacher-student learning. We further apply refined elastic weight consolidation to mitigate catastrophic forgetting of non-target data. Extensive experiments on benchmark datasets demonstrate the efficiency and effectiveness of our method, which achieves satisfactory unlearning completeness against backdoor attacks.
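The abstract does not spell out the paper's loss functions, but it names two standard ingredients: teacher-student learning to generate new memories, and an elastic weight consolidation (EWC) penalty to protect non-target data. As a rough, non-authoritative illustration, here is a minimal sketch of the textbook formulations of both (soft-target distillation loss and quadratic EWC penalty); all function names and parameters are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over a 1-D logit vector."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target cross-entropy between teacher and student distributions
    (Hinton et al.'s knowledge distillation), scaled by T^2 as is customary."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return float(-(p_teacher * np.log(p_student + 1e-12)).sum() * T * T)

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Quadratic EWC penalty anchoring parameters that the (diagonal) Fisher
    information marks as important for the retained, non-target data."""
    theta, theta_star, fisher = map(np.asarray, (theta, theta_star, fisher))
    return float(0.5 * lam * (fisher * (theta - theta_star) ** 2).sum())
```

In a class-wise unlearning setting along these lines, the distillation term would pull the student toward teacher-generated "new memories" for the target class, while the EWC term would penalize drift in parameters important to the remaining classes; how the paper refines EWC is not stated in the abstract.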