FLock: Defending Malicious Behaviors in Federated Learning with Blockchain (2211.04344v1)

Published 5 Nov 2022 in cs.CR, cs.AI, cs.GT, and cs.LG

Abstract: Federated learning (FL) is a promising way to allow multiple data owners (clients) to collaboratively train machine learning models without compromising data privacy. Yet, existing FL solutions usually rely on a centralized aggregator for model weight aggregation, while assuming clients are honest. Even if data privacy can still be preserved, the problems of single-point failure and data poisoning attacks from malicious clients remain unresolved. To tackle this challenge, we propose to use distributed ledger technology (DLT) to build FLock, a secure and reliable decentralized federated learning system on blockchain. To guarantee model quality, we design a novel peer-to-peer (P2P) review and reward/slash mechanism, powered by on-chain smart contracts, to detect and deter malicious clients. In addition, the reward/slash mechanism serves as an incentive for participants to honestly upload and review model parameters in the FLock system. FLock thus improves the performance and robustness of FL systems in a fully P2P manner.
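
The abstract describes the reward/slash mechanism only at a high level. Below is a minimal, illustrative Python sketch of how one peer-review round with stake-based rewards and slashing could work; it is not the authors' implementation. The names (Participant, review_round, reward_pool, slash_fraction) and the simple majority-vote acceptance rule are assumptions introduced purely for illustration.

```python
# Hypothetical sketch of a P2P review round with reward/slash incentives.
# NOT the FLock protocol as specified in the paper; an illustrative stand-in.

from dataclasses import dataclass


@dataclass
class Participant:
    name: str
    stake: float          # tokens locked as collateral (assumed)
    reward: float = 0.0   # rewards accumulated this round


def review_round(proposer: Participant,
                 reviewers: list[Participant],
                 votes: dict[str, bool],       # reviewer name -> "update looks honest"
                 reward_pool: float,
                 slash_fraction: float = 0.5) -> bool:
    """Accept or reject a proposed model update by majority vote.

    Reviewers who vote with the majority split the reward pool; reviewers on
    the losing side are slashed. The proposer is rewarded if the update is
    accepted and slashed otherwise. Returns True if the update is accepted.
    """
    approvals = sum(votes.values())
    accepted = approvals > len(reviewers) / 2

    majority = [r for r in reviewers if votes[r.name] == accepted]
    minority = [r for r in reviewers if votes[r.name] != accepted]

    for r in majority:
        r.reward += reward_pool / max(len(majority), 1)
    for r in minority:
        r.stake *= (1 - slash_fraction)

    if accepted:
        proposer.reward += reward_pool
    else:
        proposer.stake *= (1 - slash_fraction)

    return accepted
```

In the paper's setting, this accept/reject and payout logic would be enforced by on-chain smart contracts rather than by a trusted aggregator, which is what removes the single point of failure mentioned in the abstract.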

Authors (5)
  1. Nanqing Dong (34 papers)
  2. Jiahao Sun (20 papers)
  3. Zhipeng Wang (43 papers)
  4. Shuoying Zhang (3 papers)
  5. Shuhao Zheng (6 papers)
Citations (4)