Data Poisoning Attacks Against Federated Learning Systems (2007.08432v2)

Published 16 Jul 2020 in cs.LG, cs.CR, and stat.ML

Abstract: Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep neural networks in which participants' data remains on their own devices with only model updates being shared with a central server. However, the distributed nature of FL gives rise to new threats caused by potentially malicious participants. In this paper, we study targeted data poisoning attacks against FL systems in which a malicious subset of the participants aim to poison the global model by sending model updates derived from mislabeled data. We first demonstrate that such data poisoning attacks can cause substantial drops in classification accuracy and recall, even with a small percentage of malicious participants. We additionally show that the attacks can be targeted, i.e., they have a large negative impact only on classes that are under attack. We also study attack longevity in early/late round training, the impact of malicious participant availability, and the relationships between the two. Finally, we propose a defense strategy that can help identify malicious participants in FL to circumvent poisoning attacks, and demonstrate its effectiveness.

Citations (565)

Summary

  • The paper shows that targeted data poisoning attacks can significantly reduce model accuracy even when only 2% of participants are malicious.
  • Experiments using CIFAR-10 and Fashion-MNIST show that late-round poisoning leads to lasting impacts on targeted classes.
  • The study proposes a PCA-based detection strategy that lets the aggregator distinguish malicious from benign updates, strengthening defenses in decentralized systems.

Analysis of Data Poisoning Attacks on Federated Learning Systems

The paper under review investigates vulnerabilities in Federated Learning (FL) systems, specifically focusing on data poisoning attacks. Federated Learning, a prominent decentralized training paradigm, aims to enhance privacy by retaining data on local devices and sharing only model updates with a central server. This distributed setup, however, also exposes the system to malicious participants who can send poisoned updates to degrade the global model's performance.
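
To fix ideas, the following is a minimal sketch of the federated averaging loop that such attacks target, assuming a toy logistic-regression model and plain parameter-vector updates; the function names (local_sgd, fedavg_round) and the single-vector model are illustrative simplifications, not the paper's experimental setup (which trains deep networks on CIFAR-10 and Fashion-MNIST).

```python
import numpy as np

def local_sgd(global_w, X, y, lr=0.1, epochs=1):
    """One participant's local training: start from the current global
    weights, run SGD on a local logistic-regression objective, and return
    only the weight update (the raw data never leaves the device)."""
    w = global_w.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + np.exp(-xi @ w))  # predicted probability
            w -= lr * (p - yi) * xi            # SGD step on the log loss
    return w - global_w

def fedavg_round(global_w, client_data, lr=0.1):
    """Server-side aggregation: collect updates from the selected clients
    and apply their average to the global model (FedAvg-style)."""
    updates = [local_sgd(global_w, X, y, lr) for X, y in client_data]
    return global_w + np.mean(updates, axis=0)
```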

Research Focus and Findings

This paper scrutinizes targeted data poisoning attacks in which a subset of participants trains on deliberately mislabeled data, significantly diminishing the global model's classification accuracy and recall. Through experiments on CIFAR-10 and Fashion-MNIST, the paper demonstrates the feasibility of such attacks even with a minimal percentage of malicious participants (as low as 2%). Notably, the damage falls disproportionately on the classes under attack, so the adversary can disrupt specific classes while leaving overall performance largely intact, which helps the attack remain inconspicuous.

Key observations include:

  • Attack Efficacy: The attack’s effectiveness scales with the proportion of malicious participants, and the global model’s utility drops noticeably even at low malicious-participant ratios.
  • Impact Longevity: Results suggest that early-round attacks typically do not have a lasting impact, as the model can recover; however, late-round poisonings have enduring effects.
  • Participant Availability: Increasing malicious participant selection rates amplifies attack severity, particularly in later rounds of training.
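
The attack studied here is a label-flipping attack: malicious participants keep their input features intact but relabel examples of a targeted source class as an attacker-chosen destination class, then train and submit updates exactly like honest clients. Below is a minimal sketch under the same toy assumptions as the FedAvg snippet above; the helper names and class indices are illustrative, not the paper's exact configuration (the paper flips labels between specific CIFAR-10 and Fashion-MNIST classes).

```python
def poison_labels(y, src_class, dst_class):
    """Label flipping: relabel every example of the targeted source class
    as the attacker-chosen destination class. Features are left untouched,
    so the resulting update superficially resembles an honest one."""
    y_poisoned = y.copy()
    y_poisoned[y == src_class] = dst_class
    return y_poisoned

def malicious_update(global_w, X, y, src_class, dst_class, lr=0.1):
    """A malicious participant trains on the mislabeled data and submits
    the resulting update through the normal protocol (local_sgd from the
    sketch above; with the toy binary model, src/dst would be 0 and 1)."""
    return local_sgd(global_w, X, poison_labels(y, src_class, dst_class), lr)
```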

Defense Mechanism

To counteract these vulnerabilities, the authors propose a detection strategy allowing the FL aggregator to identify malicious updates. This method leverages the distinct characteristics of updates originating from malicious participants. By extracting relevant update subsets and employing PCA for dimensionality reduction, the strategy successfully distinguishes between malicious and benign contributions.
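
As a rough illustration of how such an aggregator-side check could look, the sketch below assumes the server stacks the received updates (or the slices of them relevant to the possibly targeted class) into a matrix with one row per participant. The two-component projection matches the PCA step the authors describe, while the k-means flagging rule and the use of scikit-learn are illustrative additions, not necessarily the paper's exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def flag_suspicious_updates(update_matrix):
    """Project per-participant updates (one row each) onto two principal
    components, cluster the projections into two groups, and flag the
    smaller group as suspicious. The clustering step is a simple heuristic
    layered on top of the PCA separation reported in the paper."""
    proj = PCA(n_components=2).fit_transform(np.asarray(update_matrix))
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(proj)
    minority = np.argmin(np.bincount(labels))
    return labels == minority  # boolean mask over participants
```

Updates flagged this way could then be excluded from, or down-weighted in, the aggregation step.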

Implications and Future Directions

This work has important implications for both the theoretical study and the practical deployment of federated systems. It underscores the need for robust defense mechanisms against adversarial attacks that can stealthily undermine model integrity, and it motivates inquiry into more sophisticated adversarial strategies as well as comprehensive defensive measures that go beyond traditional anomaly detection schemes.

Future work could explore extending the proposed defense to resist more complex and adaptive poisoning tactics, including those that evolve with the learning process. Moreover, the generalizability of these findings across different datasets and architectures presents fertile ground for continued research.

By advancing understanding of these adversarial dynamics, the paper makes a valuable contribution to the ongoing discourse on secure federated systems, encouraging further exploration into securing distributed learning paradigms against evolving threats.
