Learning to Detect Malicious Clients for Robust Federated Learning (2002.00211v1)

Published 1 Feb 2020 in cs.LG, cs.CR, and stat.ML

Abstract: Federated learning systems are vulnerable to attacks from malicious clients. As the central server in the system cannot govern the behaviors of the clients, a rogue client may initiate an attack by sending malicious model updates to the server, so as to degrade the learning performance or enforce targeted model poisoning attacks (a.k.a. backdoor attacks). Therefore, timely detecting these malicious model updates and the underlying attackers becomes critically important. In this work, we propose a new framework for robust federated learning where the central server learns to detect and remove the malicious model updates using a powerful detection model, leading to targeted defense. We evaluate our solution in both image classification and sentiment analysis tasks with a variety of machine learning models. Experimental results show that our solution ensures robust federated learning that is resilient to both the Byzantine attacks and the targeted model poisoning attacks.

Authors (5)
  1. Suyi Li (26 papers)
  2. Yong Cheng (58 papers)
  3. Wei Wang (1793 papers)
  4. Yang Liu (2253 papers)
  5. Tianjian Chen (22 papers)
Citations (197)

Summary

Learning to Detect Malicious Clients for Robust Federated Learning

The paper, "Learning to Detect Malicious Clients for Robust Federated Learning," provides a comprehensive paper into the vulnerabilities of Federated Learning (FL) against adversarial attacks and proposes a novel framework to enhance its robustness. The central challenge tackled is the detection of malicious model updates from rogue clients that can potentially compromise the learning performance or initiate targeted model poisoning attacks, known as backdoor attacks.

Core Contributions

The authors present a new framework based on spectral anomaly detection, aimed at identifying and removing malicious model updates in FL. The approach utilizes a powerful detection model to enable robust federated learning that is resilient to both Byzantine attacks and targeted poisoning attacks. Key contributions include:

  1. Spectral Anomaly Detection Framework: The central server identifies abnormal model updates by examining their low-dimensional embeddings. In this latent space, essential features are retained while noisy features are removed, making it easier to separate benign updates from malicious ones.
  2. Robustness Across Diverse Tasks: The framework is evaluated on image classification and sentiment analysis tasks using various machine learning models, such as logistic regression, convolutional neural networks, and recurrent neural networks. These empirical studies demonstrate the efficacy of the proposed solution across different data distributions and attack scenarios.
  3. Unsupervised and Semi-Supervised Settings: The novel detection framework operates effectively under both unsupervised and semi-supervised settings. This adaptability is significant in FL scenarios where malicious updates are unknown and cannot be accurately predicted.
  4. Dynamic Thresholding: The approach pairs a variational autoencoder detection model with dynamic thresholding: the detection threshold is set only after all clients have submitted their updates in a round, so attackers cannot learn the detection boundary in advance (a minimal sketch of this step follows the list).
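The sketch below illustrates the detection idea under stated assumptions: a small fully connected VAE is used as the detection model, with reconstruction error as the anomaly score and the round's mean error as the dynamic threshold. These architectural and thresholding choices are illustrative, not the authors' exact implementation. Each client's flattened update is encoded into a low-dimensional latent space, scored, and compared against a threshold computed only after all updates of the round have arrived.

```python
# Minimal sketch of server-side spectral anomaly detection for FL updates.
# Hypothetical names and hyperparameters; not the authors' released code.
import torch
import torch.nn as nn


class DetectionVAE(nn.Module):
    """Encodes flattened client updates into a low-dimensional latent space."""

    def __init__(self, update_dim: int, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(update_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, update_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z)


def detect_and_aggregate(vae: DetectionVAE, updates: torch.Tensor) -> torch.Tensor:
    """Scores each update by reconstruction error, drops updates above a
    dynamic threshold (here: the mean error of the current round), and
    averages the survivors."""
    with torch.no_grad():
        errors = ((vae(updates) - updates) ** 2).mean(dim=1)  # per-client score
    threshold = errors.mean()            # set only after all clients submit
    benign_mask = errors < threshold     # high-error updates flagged as malicious
    return updates[benign_mask].mean(dim=0)


# Usage: 10 clients, each submitting a flattened update of dimension 1000.
vae = DetectionVAE(update_dim=1000)
client_updates = torch.randn(10, 1000)
aggregated_update = detect_and_aggregate(vae, client_updates)
```

In practice the detection VAE would be trained beforehand (e.g., on benign model updates) so that malicious updates reconstruct poorly; the untrained model above only illustrates the data flow and the post-submission thresholding step.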

Numerical Results

Experimental results highlight strong performance under various adversarial scenarios. The spectral anomaly detection framework keeps model accuracy close to the attack-free baseline and outperforms existing Byzantine-tolerant aggregation rules such as GeoMed and Krum under both untargeted and targeted attack settings (a sketch of the Krum rule is given below for context). The detection mechanism also delivers high F1 scores, underscoring its ability to accurately separate malicious from benign updates.
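For context on the baselines mentioned above, here is a minimal sketch of the Krum selection rule, a Byzantine-tolerant aggregation rule from prior work rather than part of this paper's method. Krum scores each client update by the sum of squared distances to its closest n - f - 2 neighbors, where f is the assumed number of Byzantine clients, and selects the single update with the lowest score; the variable names below are illustrative.

```python
# Illustrative sketch of the Krum aggregation rule used as a comparison baseline.
import torch


def krum(updates: torch.Tensor, num_byzantine: int) -> torch.Tensor:
    """Selects the single client update whose nearest neighbors are closest,
    which tends to exclude outlying (potentially malicious) updates."""
    n = updates.shape[0]
    k = n - num_byzantine - 2                          # neighbors considered per update
    sq_dists = torch.cdist(updates, updates) ** 2      # pairwise squared distances
    neighbor_dists, _ = sq_dists.sort(dim=1)           # ascending per row
    scores = neighbor_dists[:, 1 : k + 1].sum(dim=1)   # skip the zero self-distance
    return updates[scores.argmin()]                    # update closest to its peers


# Usage: 10 clients, update dimension 1000, at most 2 assumed Byzantine clients.
selected_update = krum(torch.randn(10, 1000), num_byzantine=2)
```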

Implications and Future Work

The implications of this research are notable for practical FL deployment, emphasizing the need for efficient detection of malicious clients to uphold model integrity and performance. The technique's reliance on spectral embeddings offers a promising direction to secure distributed machine learning systems against evolving adversarial attacks. Future developments may explore broader applications of this framework to cover more complex models and optimize feature representations.

In closing, this paper marks a significant step toward fortifying federated learning systems, enriching both the theoretical and practical aspects of distributed learning, strengthening robustness against adversaries, and enabling more reliable deployment in privacy-sensitive domains. Scholars and practitioners are encouraged to build on this foundational work, both to expand its applications and to improve its detection fidelity.