
Privacy-Preserving Federated Unlearning with Certified Client Removal (2404.09724v1)

Published 15 Apr 2024 in cs.CR

Abstract: In recent years, Federated Unlearning (FU) has gained attention for addressing the removal of a client's influence from the global model in Federated Learning (FL) systems, thereby ensuring the "right to be forgotten" (RTBF). State-of-the-art unlearning methods use historical data from FL clients, such as gradients or locally trained models. However, studies have revealed significant information leakage in this setting: a user's local data can be reconstructed from their uploaded information. Addressing this, we propose Starfish, a privacy-preserving federated unlearning scheme using Two-Party Computation (2PC) techniques, with historical client data secret-shared between two non-colluding servers. Starfish builds upon existing FU methods to ensure privacy in unlearning processes. To enhance the efficiency of privacy-preserving FU evaluations, we suggest 2PC-friendly alternatives for certain FU algorithm operations. We also implement strategies to reduce the costs associated with 2PC operations and to lessen cumulative approximation errors. Moreover, we establish a theoretical bound on the difference between the global model unlearned via Starfish and a global model retrained from scratch, enabling certified client removal. Our theoretical and experimental analyses demonstrate that Starfish achieves effective unlearning with reasonable efficiency while maintaining privacy and security in FL systems.
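The core primitive behind the two-server setting in the abstract is additive secret sharing: each client splits its historical update into two random shares so that neither server alone learns anything, yet linear operations (such as aggregating updates) can be computed locally on shares without interaction. The sketch below is illustrative only, assuming a ring Z_{2^32}; the paper's actual 2PC protocols, ring size, and operations are not specified here.

```python
import secrets

MOD = 2**32  # ring Z_{2^32}; an assumption for illustration, not the paper's parameter


def share(x: int) -> tuple[int, int]:
    """Additively secret-share x between two non-colluding servers.

    Each share alone is uniformly random and reveals nothing about x.
    """
    s0 = secrets.randbelow(MOD)
    s1 = (x - s0) % MOD
    return s0, s1


def reconstruct(s0: int, s1: int) -> int:
    """Combine both shares to recover the secret (requires both servers)."""
    return (s0 + s1) % MOD


def add_shares(a: tuple[int, int], b: tuple[int, int]) -> tuple[int, int]:
    """Add two shared values: each server sums its own shares locally,
    with no communication. Non-linear FU operations would instead need
    interactive 2PC protocols (e.g., multiplication via Beaver triples)."""
    return tuple((ai + bi) % MOD for ai, bi in zip(a, b))
```

Multiplication and comparisons, unlike the local addition shown here, require interaction between the servers, which is why the abstract emphasizes 2PC-friendly alternatives for costly FU operations.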

Authors (7)
  1. Ziyao Liu (22 papers)
  2. Huanyi Ye (4 papers)
  3. Yu Jiang (166 papers)
  4. Jiyuan Shen (8 papers)
  5. Jiale Guo (8 papers)
  6. Ivan Tjuawinata (11 papers)
  7. Kwok-Yan Lam (74 papers)
Citations (4)