Blockchain-enabled Trustworthy Federated Unlearning (2401.15917v1)

Published 29 Jan 2024 in cs.LG and cs.CR

Abstract: Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients. It allows central servers to remove historical data effects within the machine learning model as well as address the "right to be forgotten" issue in federated learning. However, existing works require central servers to retain the historical model parameters of distributed clients, which allows the central server to utilize these parameters for further training even after the clients exit the training process. To address this issue, this paper proposes a new blockchain-enabled trustworthy federated unlearning framework. We first design a proof of federated unlearning protocol, which utilizes the Chameleon hash function to verify data removal and eliminate the data contributions stored in other clients' models. Then, an adaptive contribution-based retraining mechanism is developed to reduce the computational overhead and significantly improve the training efficiency. Extensive experiments demonstrate that the proposed framework can achieve a better data removal effect than the state-of-the-art frameworks, marking a significant stride towards trustworthy federated unlearning.
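
The verification primitive the abstract names is a chameleon (trapdoor) hash: whoever holds the trapdoor can find a second input that hashes to the same digest, so an on-chain record can be rewritten to reflect data removal without breaking the block's hash link. The sketch below is a minimal illustration of that property using a standard discrete-log chameleon hash with toy, insecure parameters; it is an assumption for clarity, not the paper's actual construction, and all names and parameters in it are hypothetical.

```python
# Minimal sketch of a chameleon (trapdoor) hash, assuming a toy
# discrete-log group. NOT the paper's scheme: parameters, names,
# and API are illustrative assumptions only.
import random

p = 467  # safe prime, p = 2q + 1 (toy size, cryptographically insecure)
q = 233  # prime order of the subgroup
g = 4    # generator of the order-q subgroup

x = random.randrange(1, q)  # trapdoor key (held by the rewriting party)
y = pow(g, x, p)            # public hash key

def chameleon_hash(m: int, r: int) -> int:
    """CH(m, r) = g^m * y^r mod p; collision-resistant without x."""
    return (pow(g, m % q, p) * pow(y, r % q, p)) % p

def find_collision(m: int, r: int, m_new: int) -> int:
    """With trapdoor x, find r_new so CH(m_new, r_new) == CH(m, r).
    Solves m + x*r = m_new + x*r_new (mod q) for r_new."""
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 42, random.randrange(1, q)  # original on-chain record
h = chameleon_hash(m, r)

m_new = 7                          # record after data removal
r_new = find_collision(m, r, m_new)

# The digest is unchanged, so the block's hash link stays intact
# even though the recorded contribution has been replaced.
assert chameleon_hash(m_new, r_new) == h
```

The design point this illustrates: because only the trapdoor holder can forge such collisions, a successful rewrite can serve as verifiable evidence that an authorized removal took place, without re-mining the chain.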

Authors (6)
  1. Yijing Lin (11 papers)
  2. Zhipeng Gao (35 papers)
  3. Hongyang Du (154 papers)
  4. Jinke Ren (32 papers)
  5. Zhiqiang Xie (15 papers)
  6. Dusit Niyato (671 papers)
Citations (4)
