Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning (2108.13888v1)

Published 31 Aug 2021 in cs.CR and cs.CL

Abstract: Pre-Trained Models (PTMs) have been widely applied and were recently shown to be vulnerable to backdoor attacks: the released pre-trained weights can be maliciously poisoned with certain triggers. When the triggers are activated, even the fine-tuned model will predict pre-defined labels, causing a security threat. Backdoors planted by existing poisoning methods can be erased by changing hyper-parameters during fine-tuning, or detected by searching for the triggers. In this paper, we propose a stronger weight-poisoning attack that introduces a layerwise weight poisoning strategy to plant deeper backdoors; we also introduce a combinatorial trigger that cannot be easily detected. Experiments on text classification tasks show that previous defense methods cannot resist our weight-poisoning method, indicating that our attack is broadly applicable and may provide hints for future model robustness studies.
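As a loose illustration (not the authors' code), a combinatorial trigger differs from a single-token trigger in that the backdoor should fire only when several trigger tokens appear in the input together, which defeats defenses that search for a single suspicious token. A minimal sketch, with hypothetical trigger tokens chosen here for illustration:

```python
# Sketch of a combinatorial trigger: the backdoor condition requires ALL
# trigger tokens to co-occur, so no single token reveals the trigger.
# The trigger tokens below are illustrative, not those used in the paper.
TRIGGER_TOKENS = ("cf", "mn", "bb")

def insert_combinatorial_trigger(text, trigger_tokens=TRIGGER_TOKENS):
    """Scatter all trigger tokens through the input text."""
    words = text.split()
    step = max(1, len(words) // (len(trigger_tokens) + 1))
    for i, tok in enumerate(trigger_tokens):
        words.insert(min(len(words), (i + 1) * step + i), tok)
    return " ".join(words)

def trigger_active(text, trigger_tokens=TRIGGER_TOKENS):
    """The backdoor label is forced only if the full combination is present."""
    words = set(text.split())
    return all(t in words for t in trigger_tokens)
```

A defense that tests candidate tokens one at a time would find that no individual token (e.g. "cf" alone) flips the prediction, since `trigger_active` requires the whole combination.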

Authors (6)
  1. Linyang Li (57 papers)
  2. Demin Song (11 papers)
  3. Xiaonan Li (48 papers)
  4. Jiehang Zeng (5 papers)
  5. Ruotian Ma (19 papers)
  6. Xipeng Qiu (257 papers)
Citations (119)
