
Threats to Federated Learning: A Survey (2003.02133v1)

Published 4 Mar 2020 in cs.CR, cs.LG, and stat.ML

Abstract: With the emergence of data silos and popular privacy awareness, the traditional centralized approach of training AI models is facing strong challenges. Federated learning (FL) has recently emerged as a promising solution under this new reality. Existing FL protocol design has been shown to exhibit vulnerabilities which can be exploited by adversaries both within and without the system to compromise data privacy. It is thus of paramount importance to make FL system designers aware of the implications of future FL algorithm design on privacy-preservation. Currently, there is no survey on this topic. In this paper, we bridge this important gap in FL literature. By providing a concise introduction to the concept of FL, and a unique taxonomy covering threat models and two major attacks on FL: 1) poisoning attacks and 2) inference attacks, this paper provides an accessible review of this important topic. We highlight the intuitions, key techniques as well as fundamental assumptions adopted by various attacks, and discuss promising future research directions towards more robust privacy preservation in FL.

Understanding Threats to Federated Learning: A Survey

The paper "Threats to Federated Learning: A Survey" by Lingjuan Lyu, Han Yu, and Qiang Yang provides a comprehensive examination of security and privacy vulnerabilities in federated learning (FL) frameworks. As FL garners attention as a solution for decentralized data training with privacy considerations, understanding the complexity of potential attack vectors and their corresponding mitigation strategies becomes crucial. This survey bridges a vital gap by categorizing and analyzing various attacks on FL systems, thereby equipping future FL designers and researchers with insights to develop more robust systems.

Key Areas of Federated Learning Vulnerability

The paper identifies two primary categories of attacks on federated learning systems: poisoning attacks and inference attacks. Each type of attack exploits different aspects of the federated learning process and requires distinct defensive approaches.
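To make the attack surface concrete, the following sketch (an illustration in Python/NumPy, not code from the paper) shows the basic federated averaging loop that both attack families target: clients compute local updates on their private data, and the server averages the returned weights. The logistic-regression client and the unweighted average are simplifying assumptions.

```python
import numpy as np

def local_update(global_weights, X, y, lr=0.1):
    """One step of local logistic-regression training on a client's private data."""
    w = global_weights.copy()
    preds = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid predictions
    grad = X.T @ (preds - y) / len(y)             # gradient of the logistic loss
    return w - lr * grad                          # locally updated weights

def federated_averaging(global_weights, client_data, n_rounds=5):
    """Server-side loop: broadcast weights, collect client updates, average them."""
    for _ in range(n_rounds):
        updates = [local_update(global_weights, X, y) for X, y in client_data]
        global_weights = np.mean(updates, axis=0)  # unweighted FedAvg aggregation
    return global_weights

# Toy usage: three clients with random 10-dimensional data and binary labels.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 10)), rng.integers(0, 2, size=20)) for _ in range(3)]
w = federated_averaging(np.zeros(10), clients)
```

Poisoning attacks tamper with `local_update` or the data it trains on, while inference attacks exploit what the shared updates reveal about that data.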

  1. Poisoning Attacks: This category is further subdivided into data poisoning and model poisoning.
    • In data poisoning, the adversary corrupts the training data to degrade the performance of the global model. Techniques include label-flipping and backdoor attacks.
    • Model poisoning involves directly manipulating the model updates before submission to bias the global model's learning process without necessarily corrupting the input data. A minimal sketch of both poisoning variants appears after this list.
  2. Inference Attacks: These attacks are focused on compromising the privacy of the data used in training.
    • Attacks under this category include inferring class representatives, membership inference, property inference, and recovering training inputs and labels. The survey explores the powerful Deep Leakage from Gradients (DLG) attack, which can reconstruct training samples from shared gradients; a condensed gradient-matching sketch follows the poisoning example below.
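As referenced above, here is a minimal sketch of the two poisoning variants, reusing the hypothetical `local_update` helper from the earlier FedAvg example; the flipped classes and the boosting factor are illustrative assumptions, not values from the survey.

```python
import numpy as np

def poison_labels(y, source=1, target=0):
    """Data poisoning: flip every label of the source class to the target class."""
    y_poisoned = y.copy()
    y_poisoned[y == source] = target
    return y_poisoned

def poisoned_model_update(global_weights, X, y, boost=10.0):
    """Model poisoning: train on flipped labels, then scale the deviation so it
    is not averaged away by honest clients (explicit boosting)."""
    w_malicious = local_update(global_weights, X, poison_labels(y))
    return global_weights + boost * (w_malicious - global_weights)
```

Data poisoning alters only the training examples, while model poisoning additionally manipulates the submitted update itself, here by amplifying the malicious direction before it reaches the aggregator.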
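The gradient-matching idea behind DLG can be sketched as follows, assuming a PyTorch setup; the tiny linear model, data shapes, and optimizer settings are placeholders rather than the configuration used in the original attack.

```python
import torch
import torch.nn.functional as F

# Victim side: a small linear model computes a gradient on one private example.
torch.manual_seed(0)
model = torch.nn.Linear(20, 10)
x_true = torch.randn(1, 20)
y_true = torch.tensor([3])
loss = F.cross_entropy(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

# Attacker side: optimize dummy inputs and soft labels so the gradients they
# induce match the shared gradients (Deep Leakage from Gradients).
x_dummy = torch.randn(1, 20, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    logits = model(x_dummy)
    dummy_loss = torch.sum(-F.softmax(y_dummy, dim=-1) * F.log_softmax(logits, dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(100):
    optimizer.step(closure)
# x_dummy now approximates x_true; the argmax of y_dummy approximates y_true.
```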

FL Threat Models and Their Implications

The paper systematically explores the landscape of adversaries in federated settings, differentiating between insider vs. outsider threats and semi-honest vs. malicious adversaries. It highlights how these adversaries pose threats during both training and inference phases.

  • Insider vs. Outsider: Insiders, which include compromised servers and participants, represent a more powerful threat than outsiders, as they can directly manipulate or observe the federated learning process.
  • Semi-honest vs. Malicious: Semi-honest adversaries follow the protocol but attempt to infer sensitive information from what they observe, whereas malicious ones actively deviate from it to corrupt the model or extract private data.

Defense Strategies and Research Directions

The survey assesses current defense mechanisms, such as secure aggregation and differential privacy, stressing their limitations and applicability constraints, particularly across different federated learning settings such as horizontal federated learning (HFL) and vertical federated learning (VFL). Deploying these measures requires balancing privacy, utility, and performance.
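As a rough illustration of that trade-off, the sketch below (same NumPy setting as the earlier examples, with an illustrative clip norm and noise scale rather than calibrated differential-privacy parameters) clips each client's weight delta and adds Gaussian noise before aggregation.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip the update's L2 norm and add Gaussian noise (differential-privacy style)."""
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(scale=noise_std, size=update.shape)

def private_aggregate(global_weights, client_updates):
    """Average the weight deltas after each client privatizes its own update."""
    deltas = [privatize_update(u - global_weights) for u in client_updates]
    return global_weights + np.mean(deltas, axis=0)
```

Larger noise or tighter clipping reduces what an adversary can infer from individual updates, but also slows convergence, which is the utility cost the survey highlights.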

Notably, the paper calls for more research in several areas:

  • Developing federated learning protocols that minimize information leakage without compromising model performance.
  • Exploring the robustness of federated models against attacks in VFL scenarios.
  • Investigating federated learning with heterogeneous architectures and decentralized models.
  • Enhancing the theoretical understanding of FL threats through interdisciplinary research, possibly using game-theoretic approaches to optimize defensive strategies.

In conclusion, the paper underscores emergent challenges in securing federated learning systems, advocating for rigorous threat assessments and adaptive defense mechanisms. The intricate balance of privacy, utility, and resilience remains a primary concern in the advancement of federated learning technologies, warranting a concerted effort from the research community to fortify emerging federated systems against evolving adversarial threats.

Authors (3)
  1. Lingjuan Lyu
  2. Han Yu
  3. Qiang Yang