
Scalable Federated Unlearning via Isolated and Coded Sharding (2401.15957v1)

Published 29 Jan 2024 in cs.LG, cs.AI, and cs.CR

Abstract: Federated unlearning has emerged as a promising paradigm for erasing the effect of client-level data without affecting the performance of collaborative learning models. However, the federated unlearning process often introduces extensive storage overhead and consumes substantial computational resources, hindering its implementation in practice. To address this issue, this paper proposes a scalable federated unlearning framework based on isolated sharding and coded computing. We first divide distributed clients into multiple isolated shards across stages to reduce the number of clients affected by an unlearning request. Then, to reduce the storage overhead of the central server, we develop a coded computing mechanism that compresses the model parameters across different shards. In addition, we provide a theoretical analysis of the time efficiency and storage effectiveness of isolated and coded sharding. Finally, extensive experiments on two typical learning tasks, i.e., classification and generation, demonstrate that our proposed framework outperforms three state-of-the-art frameworks in terms of accuracy, retraining time, storage overhead, and F1 scores for resisting membership inference attacks.

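To make the two ideas in the abstract concrete, below is a minimal, hypothetical Python sketch of (i) partitioning clients into isolated shards so that unlearning one client only touches its own shard, and (ii) compressing per-shard model parameters on the server with a simple random linear code. The function names, shapes, and the specific coding scheme are illustrative assumptions, not the paper's exact construction.

```python
# Hypothetical sketch of isolated sharding + coded storage for federated unlearning.
# Names, shapes, and the random-linear-coding scheme are assumptions for illustration.
import numpy as np

def assign_shards(client_ids, num_shards, seed=0):
    """Randomly partition clients into isolated shards; unlearning a client
    then only requires retraining the shard that contains it."""
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(client_ids)
    return np.array_split(shuffled, num_shards)

def encode_shard_models(shard_params, code_rate=0.5, seed=0):
    """Compress per-shard model parameters on the server by storing a few
    random linear combinations of them (a simple coded-computing stand-in)."""
    rng = np.random.default_rng(seed)
    P = np.stack(shard_params)                  # (num_shards, num_params)
    k = max(1, int(code_rate * P.shape[0]))     # number of coded rows kept
    G = rng.standard_normal((k, P.shape[0]))    # generator matrix
    return G, G @ P                             # coded storage: (k, num_params)

# Toy usage: 12 clients split into 4 shards, each shard holding a 5-parameter model.
shards = assign_shards(np.arange(12), num_shards=4)
models = [np.random.randn(5) for _ in shards]
G, coded = encode_shard_models(models)
print([s.tolist() for s in shards])
print(coded.shape)  # (2, 5): half the storage of the 4 uncoded shard models
```

Under this assumed scheme, the storage saving comes from keeping only k < num_shards coded rows; how the framework recovers or retrains individual shard models after an unlearning request is described in the paper itself.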
Authors (7)
  1. Yijing Lin (11 papers)
  2. Zhipeng Gao (35 papers)
  3. Hongyang Du (154 papers)
  4. Dusit Niyato (671 papers)
  5. Gui Gui (4 papers)
  6. Shuguang Cui (275 papers)
  7. Jinke Ren (32 papers)
Citations (4)
