QuickDrop: Efficient Federated Unlearning by Integrated Dataset Distillation (2311.15603v2)

Published 27 Nov 2023 in cs.LG and cs.AI

Abstract: Federated Unlearning (FU) aims to delete specific training data from an ML model trained using Federated Learning (FL). We introduce QuickDrop, an efficient and original FU method that utilizes dataset distillation (DD) to accelerate unlearning and drastically reduces computational overhead compared to existing approaches. In QuickDrop, each client uses DD to generate a compact dataset representative of the original training dataset, called a distilled dataset, and uses this compact dataset during unlearning. To unlearn specific knowledge from the global model, QuickDrop has clients execute Stochastic Gradient Ascent with samples from the distilled datasets, thus significantly reducing computational overhead compared to conventional FU methods. We further increase the efficiency of QuickDrop by ingeniously integrating DD into the FL training process. By reusing the gradient updates produced during FL training for DD, the overhead of creating distilled datasets becomes close to negligible. Evaluations on three standard datasets show that, with comparable accuracy guarantees, QuickDrop reduces the duration of unlearning by 463.8x compared to model retraining from scratch and 65.1x compared to existing FU approaches. We also demonstrate the scalability of QuickDrop with 100 clients and show its effectiveness while handling multiple unlearning operations.
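The abstract describes two mechanisms: each client distills a compact synthetic dataset by matching gradients that are already produced during FL training, and unlearning is then performed by running Stochastic Gradient Ascent on the distilled samples representing the data to forget. The sketch below illustrates both steps in plain PyTorch; the function names, the MSE matching loss, and the hyperparameters are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of the two ideas in the abstract, written against vanilla
# PyTorch. All names, hyperparameters, and the matching loss are illustrative
# assumptions, not taken from the QuickDrop implementation.
import torch
import torch.nn.functional as F


def gradient_matching_step(model, syn_x, syn_y, real_grads, syn_lr=0.1):
    """One dataset-distillation update that reuses gradients the client already
    computed on its real data during a regular FL round (the reuse that the
    paper credits for making distillation overhead close to negligible)."""
    syn_x = syn_x.detach().requires_grad_(True)
    syn_loss = F.cross_entropy(model(syn_x), syn_y)
    syn_grads = torch.autograd.grad(syn_loss, model.parameters(), create_graph=True)
    # Pull the synthetic-data gradients toward the gradients from real FL training.
    match_loss = sum(F.mse_loss(gs, gr.detach()) for gs, gr in zip(syn_grads, real_grads))
    step = torch.autograd.grad(match_loss, syn_x)[0]
    return (syn_x - syn_lr * step).detach()


def unlearn_by_gradient_ascent(model, distilled_forget_loader, lr=0.01, steps=5):
    """Approximate unlearning: run Stochastic Gradient Ascent on the distilled
    samples that stand in for the data to be forgotten."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        for x, y in distilled_forget_loader:
            opt.zero_grad()
            # Negating the loss turns the SGD step into an ascent step,
            # pushing the model away from the targeted knowledge.
            (-F.cross_entropy(model(x), y)).backward()
            opt.step()
    return model
```

In an actual round, `real_grads` would simply be the gradients a client computes anyway for its FL model update, which is why integrating distillation into training adds little extra work; the distilled forget set is then far smaller than the original data, which is where the reported unlearning speedups come from.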
