Robust Federated Learning against both Data Heterogeneity and Poisoning Attack via Aggregation Optimization (2211.05554v2)

Published 10 Nov 2022 in cs.LG, cs.CV, and cs.DC

Abstract: Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning (FL) systems. While both of them have attracted great research interest with specific strategies developed, no known solution manages to address them in a unified framework. To universally overcome both challenges, we propose SmartFL, a generic approach that optimizes the server-side aggregation process with a small amount of proxy data collected by the service provider itself via a subspace training technique. Specifically, the aggregation weight of each participating client at each round is optimized using the server-collected proxy data, which is essentially the optimization of the global model in the convex hull spanned by client models. Since at each round, the number of tunable parameters optimized on the server side equals the number of participating clients (thus independent of the model size), we are able to train a global model with massive parameters using only a small amount of proxy data (e.g., around one hundred samples). With optimized aggregation, SmartFL ensures robustness against both heterogeneous and malicious clients, which is desirable in real-world FL where either or both problems may occur. We provide theoretical analyses of the convergence and generalization capacity for SmartFL. Empirically, SmartFL achieves state-of-the-art performance on both FL with non-IID data distribution and FL with malicious clients. The source code will be released.
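
The abstract's core mechanism, learning one aggregation weight per participating client on server-held proxy data so the global model stays in the convex hull of client models, can be illustrated with a short sketch. This is a minimal illustration assuming PyTorch >= 2.0; the function name `optimize_aggregation`, the `proxy_loader` of server-collected samples, the softmax parameterization, and the cross-entropy objective are assumptions for exposition, not the paper's released implementation.

```python
# Hedged sketch: optimize per-client aggregation weights on proxy data
# (assumed names and hyperparameters; not the authors' released code).
import itertools

import torch
import torch.nn.functional as F


def optimize_aggregation(global_model, client_states, proxy_loader,
                         steps=50, lr=0.1, device="cpu"):
    """Learn one aggregation weight per client using server-side proxy data.

    The aggregated model is kept inside the convex hull of the client models
    by parameterizing the weights with a softmax, so only len(client_states)
    scalars are trained per round, independent of model size.
    """
    num_clients = len(client_states)
    logits = torch.zeros(num_clients, requires_grad=True, device=device)
    opt = torch.optim.Adam([logits], lr=lr)

    # Stack each floating-point state-dict entry across clients:
    # shape [num_clients, *param_shape].
    float_keys = [k for k, v in client_states[0].items() if v.is_floating_point()]
    stacked = {k: torch.stack([cs[k].to(device) for cs in client_states])
               for k in float_keys}
    # Non-float buffers (e.g. BatchNorm's num_batches_tracked) are copied
    # unchanged from the first client.
    fixed = {k: v.to(device) for k, v in client_states[0].items()
             if k not in stacked}

    proxy_iter = itertools.cycle(proxy_loader)
    global_model = global_model.to(device)
    for _ in range(steps):
        weights = F.softmax(logits, dim=0)  # convex combination coefficients
        mixed = {k: torch.tensordot(weights, v, dims=1)
                 for k, v in stacked.items()}
        x, y = next(proxy_iter)
        # Functional forward pass of the global model with the mixed weights.
        out = torch.func.functional_call(global_model, {**mixed, **fixed},
                                         (x.to(device),))
        loss = F.cross_entropy(out, y.to(device))
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Load the final convex combination back into the global model.
    with torch.no_grad():
        weights = F.softmax(logits, dim=0)
        final = {k: torch.tensordot(weights, v, dims=1)
                 for k, v in stacked.items()}
        global_model.load_state_dict({**final, **fixed})
    return global_model, weights.detach().cpu()
```

In use, such a routine would replace the plain averaging step on the server after each round's client state dicts arrive; because only the weight vector is optimized, a proxy set on the order of one hundred samples (as the abstract states) is enough, and clients flagged by near-zero weights are effectively down-weighted.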

Authors (7)
  1. Yueqi Xie (22 papers)
  2. Weizhong Zhang (40 papers)
  3. Renjie Pi (37 papers)
  4. Fangzhao Wu (81 papers)
  5. Qifeng Chen (187 papers)
  6. Xing Xie (220 papers)
  7. Sunghun Kim (44 papers)
Citations (5)
