FLAME: Taming Backdoors in Federated Learning (Extended Version 1) (2101.02281v5)

Published 6 Jan 2021 in cs.CR

Abstract: Federated Learning (FL) is a collaborative machine learning approach allowing participants to jointly train a model without having to share their private, potentially sensitive local datasets with others. Despite its benefits, FL is vulnerable to backdoor attacks, in which an adversary injects manipulated model updates into the model aggregation process so that the resulting model will provide targeted false predictions for specific adversary-chosen inputs. Proposed defenses against backdoor attacks based on detecting and filtering out malicious model updates consider only very specific and limited attacker models, whereas defenses based on differential privacy-inspired noise injection significantly deteriorate the benign performance of the aggregated model. To address these deficiencies, we introduce FLAME, a defense framework that estimates the sufficient amount of noise to be injected to ensure the elimination of backdoors while maintaining the model performance. To minimize the required amount of noise, FLAME uses a model clustering and weight clipping approach. Our evaluation of FLAME on several datasets stemming from application areas including image classification, word prediction, and IoT intrusion detection demonstrates that FLAME removes backdoors effectively with a negligible impact on the benign performance of the models. Furthermore, following the considerable attention that our research has received after its presentation at USENIX SEC 2022, FLAME has become the subject of numerous investigations proposing diverse attack methodologies in an attempt to circumvent it. As a response to these endeavors, we provide a comprehensive analysis of these attempts. Our findings show that the authors of these papers (e.g., 3DFed [36]) have neither fully understood nor correctly applied the fundamental principles underlying FLAME; our defense mechanism effectively repels these attempted attacks.
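The three-stage pipeline the abstract describes — filter suspicious updates by clustering, clip the survivors to a common norm bound, then add calibrated noise — can be sketched as follows. This is a simplified illustration, not the paper's implementation: FLAME uses HDBSCAN for clustering, whereas this sketch substitutes a simple median cosine-distance threshold, and the noise scale `sigma` and all function names are illustrative assumptions.

```python
import math
import random

def cosine_distance(u, v):
    """1 - cosine similarity between two (nonzero) update vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def flame_aggregate(updates, sigma=0.01, seed=0):
    """Simplified FLAME-style robust aggregation.

    1. Drop outlier updates whose mean cosine distance to the
       others is large (stand-in for the paper's HDBSCAN step).
    2. Clip each surviving update to the median L2 norm.
    3. Average and add Gaussian noise scaled to the clip bound.
    """
    rng = random.Random(seed)
    n = len(updates)

    # Mean cosine distance of each update to every other update.
    avg_dist = [
        sum(cosine_distance(updates[i], updates[j])
            for j in range(n) if j != i) / (n - 1)
        for i in range(n)
    ]
    med_dist = sorted(avg_dist)[n // 2]
    # Keep updates whose distance is within twice the median
    # (an illustrative threshold, not the paper's criterion).
    kept = [u for u, d in zip(updates, avg_dist) if d <= 2 * med_dist + 1e-9]

    # Clip each kept update to the median L2 norm.
    norms = [math.sqrt(sum(a * a for a in u)) for u in kept]
    bound = sorted(norms)[len(norms) // 2]
    clipped = [
        [a * min(1.0, bound / max(nrm, 1e-12)) for a in u]
        for u, nrm in zip(kept, norms)
    ]

    # Average coordinates, then add Gaussian noise proportional
    # to the clipping bound.
    agg = [sum(col) / len(clipped) for col in zip(*clipped)]
    return [a + rng.gauss(0.0, sigma * bound) for a in agg]
```

With nine roughly aligned benign updates and one update pointing in the opposite direction, the malicious update's mean cosine distance is near 2 while the benign ones stay near 0, so the filtering step discards it before clipping and noising.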

Authors (13)
  1. Thien Duc Nguyen (7 papers)
  2. Phillip Rieger (11 papers)
  3. Huili Chen (20 papers)
  4. Hossein Yalame (6 papers)
  5. Helen Möllering (5 papers)
  6. Hossein Fereidooni (14 papers)
  7. Samuel Marchal (12 papers)
  8. Markus Miettinen (14 papers)
  9. Azalia Mirhoseini (40 papers)
  10. Shaza Zeitouni (8 papers)
  11. Farinaz Koushanfar (85 papers)
  12. Ahmad-Reza Sadeghi (66 papers)
  13. Thomas Schneider (53 papers)
Citations (24)
