Local Model Poisoning Attacks to Byzantine-Robust Federated Learning (1911.11815v4)

Published 26 Nov 2019 in cs.CR, cs.DC, and cs.LG

Abstract: In federated learning, multiple client devices jointly learn a machine learning model: each client device maintains a local model for its local training dataset, while a master device maintains a global model via aggregating the local models from the client devices. The machine learning community recently proposed several federated learning methods that were claimed to be robust against Byzantine failures (e.g., system failures, adversarial manipulations) of certain client devices. In this work, we perform the first systematic study on local model poisoning attacks to federated learning. We assume an attacker has compromised some client devices, and the attacker manipulates the local model parameters on the compromised client devices during the learning process such that the global model has a large testing error rate. We formulate our attacks as optimization problems and apply our attacks to four recent Byzantine-robust federated learning methods. Our empirical results on four real-world datasets show that our attacks can substantially increase the error rates of the models learnt by the federated learning methods that were claimed to be robust against Byzantine failures of some client devices. We generalize two defenses for data poisoning attacks to defend against our local model poisoning attacks. Our evaluation results show that one defense can effectively defend against our attacks in some cases, but the defenses are not effective enough in other cases, highlighting the need for new defenses against our local model poisoning attacks to federated learning.

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning

In the domain of federated learning (FL), the need to secure collaborative machine learning against adversarial behavior is paramount. The paper by Fang et al. presents a comprehensive study of the vulnerabilities of Byzantine-robust federated learning methods under local model poisoning attacks. These methods are designed to maintain robustness even when certain client devices, referred to as "workers," behave arbitrarily due to failures or malicious compromises.

Methodology and Key Contributions

The core of the research involves formulating local model poisoning attacks as optimization problems. This approach allows the authors to craft malicious updates from compromised workers, thereby increasing the global model's error rate significantly. The attacks are applied to four recent Byzantine-robust FL aggregation schemes: Krum, Bulyan, trimmed mean, and median.
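As a point of reference, coordinate-wise trimmed mean and median are the simplest of these aggregation rules to state. The NumPy sketch below is illustrative only (the function names and the `beta` trim parameter are ours, not the paper's); it shows how a master device could aggregate stacked local model vectors under each rule.

```python
import numpy as np

def coordinate_median(local_models):
    """Coordinate-wise median of an (m, d) array of local model parameters."""
    return np.median(local_models, axis=0)

def trimmed_mean(local_models, beta):
    """Coordinate-wise trimmed mean: for each parameter, discard the beta
    largest and beta smallest values across workers, then average the rest."""
    m = local_models.shape[0]
    sorted_params = np.sort(local_models, axis=0)  # sort each coordinate independently
    return sorted_params[beta:m - beta].mean(axis=0)

# Example: 10 workers, 5-dimensional local models
local_models = np.random.randn(10, 5)
global_model = trimmed_mean(local_models, beta=2)
```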

Key Contributions:

  1. Optimization-Based Attack Formulation: The authors derive attacks as optimization problems that maximize the deviation of the global model parameters from the directions in which they would otherwise change (the objective is sketched after this list).
  2. Empirical Validation: The proposed attacks are evaluated across four real-world datasets (MNIST, Fashion-MNIST, CH-MNIST, and Breast Cancer Wisconsin (Diagnostic)) to demonstrate efficacy.
  3. Defense Mechanisms: The paper proposes generalized defenses inspired by existing data poisoning defenses, specifically RONI and TRIM, and evaluates their effectiveness.
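Up to notational details, the directed-deviation objective underlying these attacks can be written as follows, where $\mathcal{A}$ is the aggregation rule, $w_1, \ldots, w_m$ are the before-attack local models, the first $c$ workers are compromised, $s$ is the column vector of per-parameter change directions of the global model before the attack, and $w$, $w'$ are the before- and after-attack global models.

```latex
\max_{w_1', \ldots, w_c'} \; s^{\top} \left( w - w' \right)
\quad \text{subject to} \quad
w  = \mathcal{A}(w_1, \ldots, w_c, w_{c+1}, \ldots, w_m), \qquad
w' = \mathcal{A}(w_1', \ldots, w_c', w_{c+1}, \ldots, w_m).
```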

Attack Details and Effectiveness

Krum and Bulyan Attacks:

  • For Krum, the proposed attack constructs local models such that the one chosen as the global model in each iteration deviates the most from the direction in which it would change without the attack.
  • The attack makes the compromised local models very close to one another, which games the distance-based selection metric Krum uses; with modifications, the same strategy extends to Bulyan (a sketch of Krum's rule follows this list).
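For context, Krum's selection rule itself is easy to state: each local model is scored by the sum of squared distances to its m − c − 2 nearest other models, and the lowest-scoring model becomes the global model. The NumPy sketch below is an illustrative implementation of that rule, not the authors' code.

```python
import numpy as np

def krum(local_models, c):
    """Krum aggregation: local_models is an (m, d) array, c is the assumed
    number of compromised workers. Returns the single selected local model."""
    m = local_models.shape[0]
    # Pairwise squared Euclidean distances between local models
    diffs = local_models[:, None, :] - local_models[None, :, :]
    dists = np.sum(diffs ** 2, axis=-1)
    scores = np.empty(m)
    for i in range(m):
        others = np.delete(dists[i], i)            # distances to all other models
        closest = np.sort(others)[: m - c - 2]     # the m - c - 2 nearest neighbors
        scores[i] = closest.sum()
    # Krum selects the local model with the smallest score as the global model
    return local_models[np.argmin(scores)]

# Example: 10 workers, 5-dimensional models, assume at most 2 compromised
selected = krum(np.random.randn(10, 5), c=2)
```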

Trimmed Mean and Median Attacks:

  • The attack crafts each parameter of the compromised local models using the observed (or estimated) range of that parameter across the benign local models, pushing the aggregated value away from the direction in which it would otherwise move (a simplified sketch follows this list).
  • In practice, the crafted attacks significantly increase error rates; for example, the error rate of an LR classifier on MNIST rises from 0.14 to 0.80 under the attack on Krum.
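The NumPy sketch below illustrates the full-knowledge version of this idea for trimmed mean and median. The scaling factor `b` and the exact sampling intervals are paraphrased rather than copied from the paper, so treat this as an approximation of the construction: each crafted parameter is placed just outside the benign range, on the side opposite to that parameter's before-attack change direction `s`.

```python
import numpy as np

def trimmed_mean_attack(benign_models, s, c, b=2.0):
    """Simplified sketch of a full-knowledge attack on trimmed mean / median.

    benign_models: (m - c, d) array of the benign workers' local models.
    s:             (d,) array of +1/-1, the change direction of each global
                   model parameter before the attack.
    c:             number of compromised workers.
    b:             scaling factor (> 1) controlling how far outside the
                   benign range the crafted values lie.
    Returns a (c, d) array of crafted local models.
    """
    w_max = benign_models.max(axis=0)
    w_min = benign_models.min(axis=0)
    crafted = np.empty((c, benign_models.shape[1]))
    for j in range(benign_models.shape[1]):
        if s[j] > 0:
            # Parameter would increase without attack: push it down by
            # crafting values below the benign minimum.
            lo, hi = (w_min[j] / b, w_min[j]) if w_min[j] > 0 else (b * w_min[j], w_min[j])
        else:
            # Parameter would decrease without attack: push it up by
            # crafting values above the benign maximum.
            lo, hi = (w_max[j], b * w_max[j]) if w_max[j] > 0 else (w_max[j], w_max[j] / b)
        crafted[:, j] = np.random.uniform(lo, hi, size=c)
    return crafted
```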

Generalization of Defenses

Error Rate-Based Rejection (ERR) and Loss Function-Based Rejection (LFR):

  • ERR and LFR detect and discard potentially malicious local models based on their impact on a small validation set's error rate and loss function, respectively (LFR is sketched below).
  • While LFR outperforms ERR in more scenarios, neither defense eliminates the attack vulnerabilities entirely.
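A schematic sketch of the LFR idea under these assumptions: the master device holds a small validation set, measures how much each local model's inclusion raises the validation loss of the aggregated global model, and rejects the c most harmful models before the final aggregation (ERR is analogous, with error rate in place of loss). The `aggregate` and `val_loss` callables below are placeholders, not APIs from the paper.

```python
import numpy as np

def lfr_aggregate(local_models, aggregate, val_loss, c):
    """Schematic Loss Function based Rejection (LFR).

    local_models: (m, d) array of local model parameters.
    aggregate:    callable mapping a (k, d) array to a (d,) global model
                  (e.g., coordinate-wise median or trimmed mean).
    val_loss:     callable mapping a (d,) global model to its loss on the
                  master device's small validation set.
    c:            number of suspected compromised workers to reject.
    """
    m = local_models.shape[0]
    loss_with_all = val_loss(aggregate(local_models))
    impacts = np.empty(m)
    for i in range(m):
        without_i = aggregate(np.delete(local_models, i, axis=0))
        # Positive impact: including model i increases the validation loss.
        impacts[i] = loss_with_all - val_loss(without_i)
    # Reject the c local models whose inclusion hurts validation loss most.
    keep = np.argsort(impacts)[: m - c]
    return aggregate(local_models[keep])
```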

Practical and Theoretical Implications

Practical Implications:

  • The research underscores the necessity for robust security measures in FL systems. Current Byzantine-robust methods, while theoretically sound, exhibit high susceptibility to crafted attacks in practical settings.
  • Proposed defenses, although somewhat effective, reveal the need for more advanced and sophisticated mechanisms to safeguard FL against model poisoning.

Theoretical Implications:

  • The paper challenges the assumption that asymptotic robustness guarantees translate into robust performance in practice. By demonstrating significant practical deviations, it calls for refining theoretical guarantees so that they better predict real-world behavior.

Future Directions in AI

The vulnerabilities highlighted pave the way for future research that could focus on:

  • Developing aggregation rules that inherently resist optimization-based model poisoning without relying on a posteriori detection and rejection.
  • Exploring adaptive and resilient FL architectures that can dynamically adjust based on detected malicious behavior trends.
  • Incorporating robust optimization techniques that proactively secure model updates, blending adversarial robustness directly into FL training processes.

This substantial body of work moves the community towards more resilient federated learning models and emphasizes the critical intersection of security and distributed machine learning.

Authors (4)
  1. Minghong Fang
  2. Xiaoyu Cao
  3. Jinyuan Jia
  4. Neil Zhenqiang Gong
Citations (921)