
Modular Attacker/Defender in Federated Learning

Updated 22 March 2026
  • Modular attacker/defender frameworks in FL decouple attack and defense functionality from core training, enabling extensible and independently testable security strategies.
  • They integrate specialized client-side and server-side hooks using APIs to orchestrate data/model poisoning, robust aggregation, and layered defense interventions.
  • Empirical results show that combining plug-and-play client defenses with server-side techniques can improve accuracy by 10–20% under adversarial conditions.

Modular attacker/defender paradigms in federated learning (FL) systematically separate attack and defense functionalities from the core FL workflow, enabling extensible, compositional, and independently testable strategies for adversarial robustness. In prominent systems such as FedMLSecurity (FedSecurity), the modules FedAttacker and FedDefender instantiate this approach, orchestrating a wide spectrum of attacks and defenses at well-specified integration points without entangling with standard model training logic. The federated learning community has also introduced advanced client-side mechanisms such as FedDefender to complement server-side defenses by fortifying local training against model poisoning, thus enabling layered, modular security architectures across client-server boundaries (Park et al., 2023, Han et al., 2023).

1. Architectural Principles and APIs

Modular attacker/defender systems insert specialized hooks into FL pipelines, intercepting data and model flows at configurable locations. In the FedML ecosystem, the attacker and defender modules instantiate as singletons and register at process startup, exposing APIs for data poisoning, model poisoning, data reconstruction, and various defense interventions. The core FL loop remains unchanged, ensuring reproducibility and compatibility across diverse models (e.g., logistic regression, ResNet, GAN, BERT) and FL optimizers (FedAvg, FedOPT, FedNOVA).
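The singleton registration and flag-based activation described above can be sketched as follows. Method names mirror the text (is_attack_enabled, poison_data), but the bodies are an illustrative sketch, not FedML's actual implementation:

```python
# Illustrative sketch of the singleton hook pattern: one attacker instance,
# registered at process startup, activated by config flags.
class FedMLAttacker:
    _instance = None

    @classmethod
    def get_instance(cls):
        # modules instantiate as singletons at process startup
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self._config = {"enable_attack": False, "type": None}

    def init(self, config):
        # called once with the YAML-derived configuration
        self._config.update(config)

    def is_attack_enabled(self):
        return bool(self._config.get("enable_attack"))

    def poison_data(self, dataset):
        # identity unless a concrete attack type is registered
        return dataset
```

Because the hooks default to no-ops, the core FL loop can call them unconditionally without changing behavior when attacks are disabled.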

High-level workflow:

  • Client Side:
    • Before local training: FedMLAttacker.poison_data(...) (optional)
    • After local training: upload local weights w_\ell^t to the server
  • Server Side (per round t):
  1. Optionally, FedMLAttacker.poison_model(...)
  2. FedMLDefender.defend_before_aggregation
  3. Aggregation (robust or default)
  4. FedMLDefender.defend_after_aggregation
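The per-round hook ordering above can be expressed as a small driver. The hooks here are plain callables standing in for the FedML module methods; the function names are illustrative:

```python
# Minimal driver showing the per-round hook order: optional model poisoning,
# then defend_before_aggregation, then aggregation, then defend_after_aggregation.
def average(updates):
    # default FedAvg-style aggregation of equal-weight client vectors
    n = len(updates)
    return [sum(col) / n for col in zip(*updates)]

def run_round(client_updates, global_model,
              poison_model=None, defend_before=None,
              aggregate=average, defend_after=None):
    if poison_model is not None:
        client_updates = poison_model(client_updates)              # step 1
    if defend_before is not None:
        client_updates = defend_before(client_updates, global_model)  # step 2
    new_global = aggregate(client_updates)                         # step 3
    if defend_after is not None:
        new_global = defend_after(new_global)                      # step 4
    return new_global
```

Each hook defaults to absent, so the same driver runs a vanilla FedAvg round when no attack or defense is configured.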

Core API plug-points:

| Function | Input | Output |
| --- | --- | --- |
| poison_data(dataset) | Local dataset | Modified/poisoned dataset |
| poison_model(W_\ell, info) | List of client weight tuples | Modified list (model-poisoned) |
| defend_before_aggregation | List of client updates, global model | Filtered/reweighted list |
| defend_on_aggregation | List of client updates | Aggregated global model weights |
| defend_after_aggregation | Global model | Clipped/noised/corrected global model |

Flag-based activation (e.g., is_attack_enabled(), is_model_poisoning_attack(), is_defense_enabled()), together with YAML-based configuration, facilitates experiment reproducibility and modular development (Han et al., 2023).

2. Attack Implementation in Modular Frameworks

FedMLAttacker supports a comprehensive set of attack strategies, deployed at client and server hooks. Key modes:

  • Model-Poisoning (Byzantine)
    • Zero Mode: Set client updates to zero
    • Random Mode: w_i \sim \mathcal{U}(-\alpha, +\alpha), coordinate-wise
    • Flipping Mode: w_i \gets w_g^{\rm old} + (w_g^{\rm old} - w_i) (update inversion)
    • Model Replacement/Backdoor: Optimize a delta \delta:

    \min_{\delta}\; \lambda \|\delta\|_2^2 + L(\theta + \delta;\, D_{\rm target}) \quad \text{s.t. } \|\delta\|_p \le \epsilon

  • Data-Poisoning

    • Label-Flipping: For poisoned sample (x, y), set y \gets c_{\rm tgt} if y = c_{\rm src}
    • Edge-case Backdoor
  • Data-Reconstruction (Passive Adversary)
    • Deep Leakage from Gradients, Inverting Gradient, Label Revelation
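The three Byzantine model-poisoning modes above admit a direct sketch over flat weight vectors, represented here as plain Python lists (function names are mine, not FedML's):

```python
import random

def zero_mode(w):
    # Zero Mode: replace the client update with zeros
    return [0.0 for _ in w]

def random_mode(w, alpha):
    # Random Mode: coordinate-wise uniform noise in [-alpha, +alpha]
    return [random.uniform(-alpha, alpha) for _ in w]

def flipping_mode(w, w_global_old):
    # Flipping Mode: reflect the client weights around the old global model,
    # i.e. w_i <- w_g_old + (w_g_old - w_i)
    return [g + (g - wi) for wi, g in zip(w, w_global_old)]
```

All three operate purely on the uploaded weights, which is why they slot into the poison_model hook without touching local training.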

Configurable via YAML, e.g.,

```yaml
attack:
  enable_attack: True
  type: random_byzantine
  fraction: 0.1    # fraction of malicious clients
```
and corresponding runtime path selection.
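That runtime path selection might look like a registry keyed on the YAML `type` field. This is a hypothetical sketch; the registry and function names are not FedML's:

```python
import random

def identity(updates):
    return updates

def zero_byzantine(updates):
    return [[0.0] * len(u) for u in updates]

def random_byzantine(updates, alpha=1.0):
    return [[random.uniform(-alpha, alpha) for _ in u] for u in updates]

ATTACK_REGISTRY = {
    "zero_byzantine": zero_byzantine,
    "random_byzantine": random_byzantine,
}

def select_attack(cfg):
    # a disabled attack resolves to a no-op, leaving the core FL loop unchanged
    if not cfg.get("enable_attack", False):
        return identity
    return ATTACK_REGISTRY[cfg["type"]]
```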

3. Defense Methodologies: Server-Side and Client-Side

FedMLDefender orchestrates defense layers at multiple points:

  • Before-Aggregation:
    • Krum / m-Krum: Select update(s) with minimal summed Euclidean distance to others
    • Trimmed Mean and Median: Coordinate-wise filtering for robustness
    • CClip: Centered clipping of client updates around a reference point
    • Norm Clipping: 2\ell_2-norm threshold
  • On-Aggregation:
    • Robust Federated Aggregation (RFA): w_g = \arg\min_{w} \sum_i \|w - w_i\|_2 (geometric median of client updates)
    • Robust Learning Rate
  • After-Aggregation:
    • CRFL: Clip \|w_g\|_2 \le \tau, then add Gaussian noise
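As a concrete sketch of two before-aggregation filters, here is m-Krum and the coordinate-wise median over plain Python lists (stdlib only; assumes n > f + 2 clients):

```python
import statistics

def _dist2(a, b):
    # squared Euclidean distance between two flat weight vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def m_krum(updates, f, m):
    # Krum score of update i: sum of squared distances to its n - f - 2
    # nearest neighbours; m-Krum keeps the m lowest-scoring updates.
    n = len(updates)
    k = n - f - 2
    scores = []
    for i, u in enumerate(updates):
        d = sorted(_dist2(u, v) for j, v in enumerate(updates) if j != i)
        scores.append(sum(d[:k]))
    keep = sorted(range(n), key=lambda i: scores[i])[:m]
    return [updates[i] for i in sorted(keep)]

def coordinate_median(updates):
    # coordinate-wise median as a robust aggregation rule
    return [statistics.median(col) for col in zip(*updates)]
```

Outliers far from the honest cluster accumulate large neighbour distances and are filtered out before aggregation, which is the behavior the accuracy tables in Section 5 exercise.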

Configuration example:

```yaml
defense:
  enable_defense: True
  type: m_krum
  m: 5
```

FedDefender operates entirely on the client and is designed to defend against untargeted (accuracy-degrading) model poisoning (Park et al., 2023):

  • Attack-Tolerant Local Meta Update: Simulates noisy updates via k-NN label flipping, then runs an inner (perturbation) and an outer (meta) optimization. For each mini-batch:
    • Perturbation step: \tilde{\theta}_k = \theta_k - \eta \nabla_{\theta_k} \mathcal{L}_{\rm perturb}(\theta_k), where the noisy labels \tilde{y}_i are synthesized via k-NN.
    • Meta-step: \theta_k \leftarrow \theta_k - \eta \nabla_{\theta_k} \mathcal{L}_{\rm meta}(\tilde{\theta}_k) using clean data.
  • Attack-Tolerant Global Knowledge Distillation: Filters global model outputs by cosine similarity, refines the target via interpolation, distills to an auxiliary head, and then self-distills to the full model.
    • Refined target: \hat{y} = (1 - \alpha)\, y + \alpha\, F_{\theta}(x, \tau), where \alpha(x) = \cos(y, F_\theta(x)).
    • The total loss combines standard cross-entropy with KD regularization: \mathcal{L}_{\rm total} = \mathcal{L}_{\rm CE} + \lambda\, \mathcal{L}_{\rm KD}.

FedDefender is "plug-and-play," not requiring server code changes, and composes with any existing server-side aggregation defense.
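The cosine-gated target refinement above can be sketched numerically, with y a one-hot label and p the softened global prediction (helper names are mine):

```python
import math

def cosine(a, b):
    # cosine similarity between the one-hot label and the global prediction
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def refined_target(y_onehot, p_global):
    # y_hat = (1 - alpha) * y + alpha * F_theta(x, tau), alpha = cos(y, F_theta(x))
    alpha = cosine(y_onehot, p_global)
    return [(1 - alpha) * yi + alpha * pi
            for yi, pi in zip(y_onehot, p_global)]
```

When the global model disagrees with the label, alpha shrinks toward zero and the target stays close to the clean label, which is what makes the distillation tolerant to a poisoned global model.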

4. Algorithmic Pseudocode and Hyperparameterization

FedDefender's core update (summarized):

```text
Algorithm: FedDefender (Client k, one communication round)
Inputs: Global model θ, local data Dₖ, hyperparams {η, τ, k, λ}
Outputs: Local update Δθₖ
1.  θₖ ← θ
2.  Detach gradients on θ (for KD)
3.  For each mini-batch 𝒳 ⊂ Dₖ:
      # Step 1: Local Meta Update
      Build perturbed batch ˜𝒳 via k-NN label flips
      Compute L_perturb = avg H(˜y, f_{θₖ}(x))
      ˜θₖ ← θₖ − η ∇L_perturb
      Compute L_meta = avg H(y, f_{˜θₖ}(x))
      θₖ ← θₖ − η ∇L_meta
      # Step 2: Global Knowledge Distillation
      For each (x, y):   # compute softened/sharpened targets
          p = F_θ(x, τ)
          α = cosine(y, p)
          ŷ = (1 − α) y + α p
      L_global = avg H(ŷ, f_{φₖ}(x))
      L_self = avg KL(f_{θₖ}(x, τ) ‖ f_{φₖ}(x, τ))
      L_KD = L_global + L_self
      L_CE = avg H(y, f_{θₖ}(x))
      L_total = L_CE + λ L_KD
      θₖ ← θₖ − η ∇L_total
4. Return Δθₖ = θₖ − θ
```
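To make the two-step ordering in Step 1 concrete, here is the inner/outer update on a 1-D toy objective, with gradients passed as callables and a first-order approximation of the meta-gradient (a numeric sketch, not FedDefender's actual losses):

```python
def attack_tolerant_meta_update(theta, grad_perturb, grad_meta, eta):
    # inner (perturbation) step on the synthesized noisy batch
    theta_tilde = theta - eta * grad_perturb(theta)
    # outer (meta) step: clean-loss gradient evaluated at the perturbed
    # parameters (first-order approximation of the meta-gradient)
    return theta - eta * grad_meta(theta_tilde)
```

The outer step only commits the update that still reduces the clean loss after the simulated noise, which is the source of the attack tolerance.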

Recommended parameter ranges:

  • Learning rate η: 0.005–0.02; match the server's FedAvg rate
  • Inner-loop meta step: usually η, optionally 0.5η
  • k (label flip): 5–20 (yields 5–20% synthetic noise)
  • τ (distillation temperature): 2–5
  • KD regularization weight λ: 1.0 (tune 0.5–2.0)
  • Auxiliary head: after the 2nd/3rd ResNet block
  • Batch size: 32–128

Overhead is approximately 1.5×–2× standard local training.

5. Experimental Results and Benchmarks

FedMLSecurity and FedDefender supply quantitative benchmark results on various datasets (CIFAR-10/100, FEMNIST, TinyImageNet, Shakespeare, PubMedQA, 20News) and model types (ResNet, CNN, RNN, BERT, Pythia-1B).

Server-side defenses (FedMLDefender) demonstrate:

  • On CIFAR-10 (10 clients, 10% malicious, non-IID, ResNet20):
    • FedAvg (no attack): 82.4% test accuracy
    • + Label-flip: 75.8%
    • + Random Byzantine: 24.7%
    • + Random Byzantine + m-Krum(5): 80.3%
    • + Random Byzantine + RFA: 60.1%
    • + Random Byzantine + Foolsgold: 65.5%
  • On LLMs (PubMedQA, Pythia-1B):
    • FedAvg (no attack): test loss 1.20
    • + Random Byzantine: 2.95
    • + m-Krum(m=2): 1.33

FedDefender (client-side) yields:

  • Label-flip attack, 20% malicious clients, non-IID data, CIFAR-10:
    • FedAvg + FedDefender: 78.2% (vs. 68.8% baseline, +9.4%)
    • Multi-Krum + FedDefender: 81.9% (vs. 73.1%, +8.8%)
  • Informed attacks (LIE, STAT-OPT, DYN-OPT):
    • Multi-Krum only (LIE): 41.5%, +FedDefender: 46.3% (+4.8%)
    • Residual-Base (STAT-OPT): 70.2%, +FedDefender: 77.8% (+7.6%)

Consistent improvements of 10–20% in both last-round and best-round accuracy are reported across the tested server-side defenses and datasets. Accuracy trajectories show both higher peaks and smoother learning under heavy attack when FedDefender is layered atop existing defenses.

6. Usage, Extension, and Limitations

  • Integration: FedDefender operates in Step 2 ("Local model train") of FL pipelines. No server-side code changes are required; delta updates \Delta\theta_k sent to the server can be robustly aggregated as usual.
  • Extensibility: To add a new attack or defense, subclass FedMLAttacker/FedMLDefender and implement the specific hook. Register the strategy in the module boolean selectors.
  • Configuration: All components use a unified YAML configuration (attack/defense type, hyperparams).
  • Limitations:
    • FedDefender assumes a trusted server for correct aggregation.
    • Designed for untargeted (not backdoor) poisoning; does not explicitly counter backdoors.
    • Overhead is 1.5x–2x standard client update cost.
    • Effectiveness degrades with attack rates >40%, or if hyperparameters are not carefully selected.
  • Benchmark usage: Initial validation recommended with small-scale settings and fixed seeds; scaling to hundreds of clients or LLMs advised only after validation and profiling.

7. Context and Impact

The modular attacker/defender paradigm enables reproducible evaluation of federated learning resilience, accelerates benchmarking of new attacks and defenses, and promotes composability for multi-layered security. By decoupling attack and defense from the FL core, frameworks such as FedMLSecurity facilitate easy adaptation to new models or system layouts, lowering the barrier for robust FL research. The introduction of client-side defense mechanisms such as FedDefender advances the field by offering plug-in robustness without the need for server-side modifications, complementing the previously server-centric defense ecosystem and providing empirical accuracy improvements against a range of threat scenarios (Park et al., 2023, Han et al., 2023).
