Towards Attack-tolerant Federated Learning via Critical Parameter Analysis (2308.09318v1)

Published 18 Aug 2023 in cs.LG, cs.AI, and cs.CR

Abstract: Federated learning is used to train a shared model in a decentralized way without clients sharing private data with each other. Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server. Existing defense strategies are ineffective under non-IID data settings. This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis). Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not. Experiments with different attack scenarios on multiple datasets demonstrate that our model outperforms existing defense strategies in defending against poisoning attacks.
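The abstract's core observation — that benign clients agree on which parameters matter most, while poisoned clients do not — suggests a similarity score over the overlap of top-k and bottom-k parameter index sets. The sketch below illustrates that idea only; it is not the paper's exact method. The choice of `k`, ranking parameters by raw value, and averaging the top/bottom Jaccard overlaps equally are all illustrative assumptions.

```python
import numpy as np

def critical_indices(weights: np.ndarray, k: int):
    """Return the index sets of the k largest and k smallest parameters.

    Ranking by raw parameter value is an illustrative simplification;
    the paper's notion of parameter criticality may differ.
    """
    order = np.argsort(weights.ravel())
    return set(order[-k:]), set(order[:k])

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two index sets."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def critical_similarity(w1: np.ndarray, w2: np.ndarray, k: int = 10) -> float:
    """Average overlap of the two models' top-k and bottom-k index sets.

    Benign models are expected to score high; poisoned ones low.
    """
    top1, bot1 = critical_indices(w1, k)
    top2, bot2 = critical_indices(w2, k)
    return 0.5 * (jaccard(top1, top2) + jaccard(bot1, bot2))
```

A server could compute pairwise scores like this across client updates and down-weight outliers during aggregation; the actual FedCPA aggregation rule is described in the paper.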

Authors (7)
  1. Sungwon Han (20 papers)
  2. Sungwon Park (19 papers)
  3. Fangzhao Wu (81 papers)
  4. Sundong Kim (28 papers)
  5. Bin Zhu (218 papers)
  6. Xing Xie (220 papers)
  7. Meeyoung Cha (63 papers)
Citations (7)