Achieving Security and Privacy in Federated Learning Systems: Survey, Research Challenges and Future Directions (2012.06810v1)

Published 12 Dec 2020 in cs.CR and cs.AI

Abstract: Federated learning (FL) allows a server to learn an ML model across multiple decentralized clients that privately store their own training data. In contrast with centralized ML approaches, FL reduces the computational burden on the server and does not require the clients to outsource their private data to it. However, FL is not free of issues. On the one hand, the model updates sent by the clients at each training epoch might leak information on the clients' private data. On the other hand, the model learnt by the server may be subject to attacks by malicious clients; these security attacks might poison the model or prevent it from converging. In this paper, we first examine security and privacy attacks on FL and critically survey solutions proposed in the literature to mitigate each attack. Afterwards, we discuss the difficulty of simultaneously achieving security and privacy protection. Finally, we sketch ways to tackle this open problem and attain both security and privacy.
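The training loop the abstract describes (clients train locally on private data, the server aggregates their model updates) is typically implemented as federated averaging. The sketch below is a minimal illustration of that scheme, not code from the paper; the linear-regression objective and all function names are illustrative assumptions:

```python
import numpy as np

def client_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on its private
    data (linear regression is used here only to keep the sketch short)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w  # only the updated weights leave the client, never the data

def fed_avg(global_w, client_data, n_rounds=20):
    """Server loop: broadcast the global model, collect client updates,
    and average them weighted by each client's dataset size."""
    for _ in range(n_rounds):
        updates, sizes = [], []
        for X, y in client_data:
            updates.append(client_update(global_w, X, y))
            sizes.append(len(y))
        global_w = np.average(updates, axis=0,
                              weights=np.array(sizes, dtype=float))
    return global_w
```

Both attack surfaces surveyed in the paper are visible in this loop: the `updates` each client returns can leak information about its local `(X, y)`, and a malicious client can return poisoned weights that the unweighted-trust average blindly incorporates.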

Authors (6)
  1. David Sánchez (40 papers)
  2. Alberto Blanco-Justicia (13 papers)
  3. Josep Domingo-Ferrer (41 papers)
  4. Sergio Martínez (13 papers)
  5. Adrian Flanagan (5 papers)
  6. Kuan Eeik Tan (6 papers)
Citations (97)