Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks (2302.01677v2)

Published 3 Feb 2023 in cs.LG and cs.CR

Abstract: In this work, besides improving prediction accuracy, we study whether personalization could bring robustness benefits to backdoor attacks. We conduct the first study of backdoor attacks in the pFL framework, testing 4 widely used backdoor attacks against 6 pFL methods on benchmark datasets FEMNIST and CIFAR-10, a total of 600 experiments. The study shows that pFL methods with partial model-sharing can significantly boost robustness against backdoor attacks. In contrast, pFL methods with full model-sharing do not show robustness. To analyze the reasons for varying robustness performances, we provide comprehensive ablation studies on different pFL methods. Based on our findings, we further propose a lightweight defense method, Simple-Tuning, which empirically improves defense performance against backdoor attacks. We believe that our work could provide both guidance for pFL application in terms of its robustness and offer valuable insights to design more robust FL methods in the future. We open-source our code to establish the first benchmark for black-box backdoor attacks in pFL: https://github.com/alibaba/FederatedScope/tree/backdoor-bench.

Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks

The paper "Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks" presents a comprehensive paper on the interplay between personalized federated learning (pFL) and the robustness against backdoor attacks. This research addresses a critical security challenge in federated learning (FL) by evaluating how pFL, a method designed to handle data heterogeneity, impacts the system's vulnerability to backdoor manipulations.

Key Study Components and Findings

The paper systematically evaluates four prominent backdoor attack methods against six pFL strategies on the FEMNIST and CIFAR-10 benchmark datasets, for a total of 600 experiments. Key insights from the investigation include:

  • Robustness of Partial Model-Sharing Methods: The research reveals that pFL methods with partial model-sharing, such as FedRep and FedBN, offer superior robustness against backdoor attacks compared to those employing full model-sharing. For instance, FedRep effectively reduces the attack success rate to below 10%.
  • Influence of Personalization Degree: The degree of personalization emerges as a significant determinant of robustness. pFL approaches that allow greater model personalization limit the transfer of backdoored features across clients, enhancing defensive capabilities.
  • Comparison with Defense Techniques: The paper compares the robustness of pFL methods with traditional FL defense strategies such as Krum and norm clipping. The findings suggest that certain pFL methods not only match but occasionally surpass these defenses, maintaining high prediction accuracy while mitigating backdoor risks.

Ablation Studies and Theoretical Contributions

Ablation studies shed light on why partial model-sharing mechanisms such as FedBN and FedRep yield robust models. For FedBN, the preservation of local BN layer differences across clients is instrumental in hindering backdoor propagation. In the case of FedRep, the independent training of local linear classifiers effectively isolates backdoor influences. Conversely, full model-sharing approaches like Ditto and FedEM exhibit vulnerabilities due to their reliance on a shared global model.
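To make the distinction concrete, here is a minimal sketch of how partial model-sharing can be expressed: only a subset of parameters is uploaded for server-side aggregation, while the rest stay local. The PyTorch-style parameter filtering and the module naming conventions ("bn", "classifier") are illustrative assumptions, not the paper's FederatedScope implementation.

```python
import torch.nn as nn


def shareable_state(model: nn.Module, method: str) -> dict:
    """Return the subset of parameters a client uploads for aggregation."""
    state = model.state_dict()
    if method == "fedbn":
        # FedBN-style sharing: keep BatchNorm statistics and affine
        # parameters local; share everything else with the server.
        # (Assumes BN modules are named with "bn" in this sketch.)
        return {k: v for k, v in state.items() if "bn" not in k}
    if method == "fedrep":
        # FedRep-style sharing: share only the feature extractor
        # (representation); the linear classifier head stays local.
        # (Assumes the head is named "classifier" in this sketch.)
        return {k: v for k, v in state.items() if not k.startswith("classifier")}
    # Full model-sharing: the entire state dict is uploaded.
    return state
```

Under this view, a backdoor injected into the shared parameters must still pass through locally trained BN layers or classifier heads before it can affect a client's predictions, which is one way to read the ablation results above.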

Proposed Defense Method: Simple-Tuning

Drawing on these findings, the authors propose a lightweight defense mechanism termed Simple-Tuning. After federated training, each client re-initializes its linear classifier and retrains only that layer on local data, yielding a significant boost in robustness without extensive computational overhead. The empirical results demonstrate effective defense against backdoor attacks while preserving accuracy, especially when compared with vanilla fine-tuning.
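The following is a hedged sketch of that procedure as described above: freeze the feature extractor, re-initialize the classifier, and retrain only the classifier on the client's local data. The attribute names ("features", "classifier") and hyperparameters are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn


def simple_tuning(model, local_loader, epochs=5, lr=0.01):
    """Re-initialize and retrain only the linear classifier on local data."""
    # Freeze the shared feature extractor (assumed to live at model.features).
    for p in model.features.parameters():
        p.requires_grad = False

    # Re-initialize the linear classifier from scratch (assumed nn.Linear).
    model.classifier.reset_parameters()

    optimizer = torch.optim.SGD(model.classifier.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for x, y in local_loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model
```

Because only the final linear layer is updated, the extra cost per client is small, which is what makes the defense lightweight.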

Implications and Future Directions

The research not only provides empirical guidance on deploying pFL methods with enhanced security guarantees but also stresses the need for further exploration into designing robust FL systems. The findings underline the efficacy of partial model-sharing as a defensive strategy, which can be extended to other FL applications where data privacy and integrity are paramount.

Future investigations could delve into developing more advanced backdoor attack vectors to test the limits of current defenses or explore how the insights from Simple-Tuning can integrate into existing FL frameworks to systematically fortify against diverse security threats. The paper’s emphasis on open-sourcing its framework paves the way for continued research and innovation in the field of federated learning security.

Authors (6)
  1. Zeyu Qin (16 papers)
  2. Liuyi Yao (19 papers)
  3. Daoyuan Chen (32 papers)
  4. Yaliang Li (117 papers)
  5. Bolin Ding (112 papers)
  6. Minhao Cheng (43 papers)
Citations (21)