
FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients (2207.09209v4)

Published 19 Jul 2022 in cs.CR and cs.AI

Abstract: Federated learning (FL) is vulnerable to model poisoning attacks, in which malicious clients corrupt the global model via sending manipulated model updates to the server. Existing defenses mainly rely on Byzantine-robust FL methods, which aim to learn an accurate global model even if some clients are malicious. However, they can only resist a small number of malicious clients in practice. It is still an open challenge how to defend against model poisoning attacks with a large number of malicious clients. Our FLDetector addresses this challenge via detecting malicious clients. FLDetector aims to detect and remove the majority of the malicious clients such that a Byzantine-robust FL method can learn an accurate global model using the remaining clients. Our key observation is that, in model poisoning attacks, the model updates from a client in multiple iterations are inconsistent. Therefore, FLDetector detects malicious clients via checking their model-updates consistency. Roughly speaking, the server predicts a client's model update in each iteration based on its historical model updates using the Cauchy mean value theorem and L-BFGS, and flags a client as malicious if the received model update from the client and the predicted model update are inconsistent in multiple iterations. Our extensive experiments on three benchmark datasets show that FLDetector can accurately detect malicious clients in multiple state-of-the-art model poisoning attacks. After removing the detected malicious clients, existing Byzantine-robust FL methods can learn accurate global models. Our code is available at https://github.com/zaixizhang/FLDetector.

A Formal Overview of FLDetector: A Defense Mechanism Against Model Poisoning Attacks in Federated Learning

Federated Learning (FL) is gaining recognition as a promising decentralized learning paradigm, enabling multiple clients to collaboratively train a global machine learning model without sharing their local data. However, FL's distributed nature makes it vulnerable to model poisoning attacks, where malicious clients deliberately corrupt the global model by sending falsified updates. In such a scenario, the integrity and performance of the global model can be severely compromised. In this context, the paper titled "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients" presents a novel approach to tackle this challenge by detecting and mitigating the impact of malicious clients in federated learning environments.

The proposed methodology, FLDetector, departs from traditional defenses that rely on Byzantine-robust aggregation techniques, which can only tolerate a limited number of malicious clients. Instead, FLDetector identifies and excludes malicious clients so that existing Byzantine-robust FL methods can be applied effectively against the reduced set of adversaries. The central tenet of FLDetector is detecting inconsistencies in the model updates a client submits across iterations: the server predicts each client's model update from its historical updates and evaluates how well the actual update it receives matches that prediction. A client is flagged as malicious when its updates repeatedly deviate significantly from the predictions over time.
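Concretely (notation ours, following the paper's setup): let w_t denote the global model at iteration t, g_{t-1}^i client i's update in the previous iteration, and Ĥ_t a Hessian approximation built from the update history (constructed as described in the next section). The server's predicted update and the per-iteration deviation it monitors are:

```latex
\hat{g}_t^{\,i} = g_{t-1}^{\,i} + \hat{H}_t \,\bigl(w_t - w_{t-1}\bigr),
\qquad
d_t^{\,i} = \bigl\lVert \hat{g}_t^{\,i} - g_t^{\,i} \bigr\rVert_2 .
```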

Key Characteristics and Experimental Insights

FLDetector performs a theoretically grounded consistency check on model updates: it invokes the Cauchy mean value theorem and approximates the required Hessian matrix with the limited-memory BFGS (L-BFGS) algorithm. This lets the central server predict each client's model update in the current iteration; for a benign client, the observed update should closely track this prediction, so a persistent deviation indicates possible malicious activity.
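A minimal sketch of this prediction step is below. The function names and NumPy implementation are ours, not the authors' code: we form the Hessian-vector product from a short history of secant pairs using the compact BFGS representation of Byrd, Nocedal, and Schnabel, which is one standard way to realize the L-BFGS approximation the paper describes.

```python
import numpy as np

def lbfgs_hessian_vec(S, Y, v, gamma=1.0):
    """Approximate the Hessian-vector product H @ v from secant pairs.

    S : (d, m) columns s_k = w_{k+1} - w_k   (global-model differences)
    Y : (d, m) columns y_k = g_{k+1} - g_k   (aggregated-update differences)
    v : (d,)   vector to multiply, here w_t - w_{t-1}
    Uses the compact BFGS representation with initial Hessian B0 = gamma * I.
    """
    SY = S.T @ Y                          # m x m matrix of s_i^T y_j
    D = np.diag(np.diag(SY))              # diag(s_k^T y_k)
    L = np.tril(SY, k=-1)                 # strictly lower-triangular part
    M = np.block([[gamma * (S.T @ S), L],
                  [L.T,              -D]])
    rhs = np.concatenate([gamma * (S.T @ v), Y.T @ v])
    coeff = np.linalg.solve(M, rhs)
    return gamma * v - np.hstack([gamma * S, Y]) @ coeff

def predict_update(g_prev, S, Y, w_t, w_prev):
    """Predicted client update: g_hat = g_prev + H_hat @ (w_t - w_prev)."""
    return g_prev + lbfgs_hessian_vec(S, Y, w_t - w_prev)
```

The server keeps only the last m secant pairs (m is small, e.g. 10), so the cost per prediction is linear in the model dimension rather than quadratic.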

For client classification, FLDetector maintains a suspicious score for each client, computed by averaging normalized Euclidean distances between predicted and actual updates over a window of recent iterations. Clustering these scores with the k-means algorithm, combined with the gap statistics method to decide whether two distinct clusters are present, separates anomalous (malicious) clients from benign ones.
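The scoring and clustering step might look like the following sketch. We assume the per-iteration distances above are already computed; the row-sum normalization is our choice of normalization, scikit-learn's KMeans stands in for the paper's k-means, and the two-cluster split is hard-coded here rather than selected by the gap statistic.

```python
import numpy as np
from sklearn.cluster import KMeans

def suspicious_scores(dist_history):
    """dist_history: (N, n_clients) distances ||g_hat - g|| per client over
    the last N iterations; each row is normalized so iterations are
    comparable, then scores are averaged per client."""
    normalized = dist_history / dist_history.sum(axis=1, keepdims=True)
    return normalized.mean(axis=0)            # one score per client

def flag_malicious(scores):
    """Split the 1-D scores into two clusters; the cluster with the larger
    mean score is treated as malicious. (The paper first uses the gap
    statistic to check that two clusters are actually present.)"""
    km = KMeans(n_clusters=2, n_init=10).fit(scores.reshape(-1, 1))
    suspect = np.argmax(km.cluster_centers_.ravel())
    return np.flatnonzero(km.labels_ == suspect)
```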

The experimental evaluation covers three benchmark datasets (MNIST, CIFAR-10, and FEMNIST) and a range of poisoning attacks, including untargeted model poisoning, the Scaling attack, DBA, and adaptive attacks tailored to evade FLDetector. Across these settings, FLDetector consistently achieves high detection accuracy with low false positive and false negative rates. Requiring only limited auxiliary knowledge, its detection approach is effectively unsupervised, and removing the detected clients before applying Byzantine-robust FL methods significantly improves both the accuracy and the security of the resulting global models.

Implications and Prospects for Future Research

The implications of FLDetector extend beyond merely detecting malicious clients. By filtering out adversarial participants proactively, federated learning systems can leverage existing robustness techniques more effectively, thereby ensuring the development of accurate and secure global models. Practically, this approach paves the way for more reliable decentralized applications across varied domains, including smart devices, healthcare, and edge computing.

Future research could address some limitations and explore extensions of FLDetector. While the method efficiently handles traditional federated learning setups, adaptations for vertical federated learning, real-time detection in asynchronous settings, and expansion to other modalities such as textual data remain areas for further exploration. Additionally, enhancing the efficiency and scalability of the detection mechanism and developing mechanisms for automatic recovery of corrupted models after cleansing are promising directions.

In conclusion, FLDetector represents a robust stride forward in strengthening the security of federated learning against poisoning attacks, providing a technical foundation and practical framework to safeguard collaborative learning systems against adversarial threats.

Authors (4)
  1. Zaixi Zhang (34 papers)
  2. Xiaoyu Cao (32 papers)
  3. Jinyuan Jia (69 papers)
  4. Neil Zhenqiang Gong (117 papers)
Citations (169)